1999, IEEE Transactions on Information Theory
The Newton radius of a code is the largest weight of a uniquely correctable error. The covering radius is the largest distance between a vector and the code. Two relations between the Newton radius and the covering radius are given.
Discrete Mathematics, 2001
The Newton radius of a code is the largest weight of a uniquely correctable error. The covering radius is the largest distance between a vector and the closest codeword. A couple of relations involving the Newton and covering radii are discussed.
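To make the two radii concrete, here is a minimal brute-force sketch in Python that computes both for a toy binary code. The [5,2] generator matrix is a hypothetical example chosen only for illustration; following the definition above, an error is treated as uniquely correctable when it is the unique minimum-weight vector in its coset.

```python
from itertools import product

# Toy binary [5,2] code (hypothetical example for illustration only),
# spanned by the rows of G.
G = [(1, 1, 1, 0, 0),
     (0, 0, 1, 1, 1)]
n = len(G[0])

def add(u, v):
    """Componentwise addition over GF(2)."""
    return tuple((a + b) % 2 for a, b in zip(u, v))

# All codewords: GF(2)-linear combinations of the rows of G.
code = set()
for coeffs in product((0, 1), repeat=len(G)):
    w = (0,) * n
    for c, row in zip(coeffs, G):
        if c:
            w = add(w, row)
    code.add(w)

wt = lambda v: sum(v)                # Hamming weight
dist = lambda u, v: wt(add(u, v))    # Hamming distance

# Covering radius: largest distance from any vector to its closest codeword.
covering_radius = max(min(dist(v, c) for c in code)
                      for v in product((0, 1), repeat=n))

# Newton radius: largest weight of a uniquely correctable error, i.e. of an
# error vector that is the unique minimum-weight vector in its coset e + C.
def uniquely_correctable(e):
    coset = [add(e, c) for c in code]
    m = min(wt(x) for x in coset)
    return wt(e) == m and sum(wt(x) == m for x in coset) == 1

newton_radius = max(wt(e) for e in product((0, 1), repeat=n)
                    if uniquely_correctable(e))

print("covering radius:", covering_radius, "Newton radius:", newton_radius)
```

For this toy code the script reports covering radius 2 and Newton radius 1, consistent with the fact that the Newton radius never exceeds the covering radius.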
IEEE Transactions on Information Theory, 2021
Motivated by an application to linear querying of databases, such as private information-retrieval protocols, we suggest a fundamental property of linear codes: the generalized covering radius. The generalized covering-radius hierarchy of a linear code characterizes the trade-off between storage amount, latency, and access complexity in such database systems. Several equivalent definitions are provided, showing this to be a combinatorial, geometric, and algebraic notion. We derive bounds on the code parameters in relation to the generalized covering radii, study the effect of simple code operations, and describe a connection with generalized Hamming weights.
IEEE Transactions on Information Theory, 1985
IEEE Transactions on Information Theory, 1986
A number of upper and lower bounds are obtained for K(n, R), the minimal number of codewords in any binary code of length n and covering radius R. Several new constructions are used to derive the upper bounds, including an amalgamated direct sum construction for nonlinear codes. This construction works best when applied to normal codes, and we give some new and stronger conditions which imply that a linear code is normal. An upper bound is given for the density of a covering code over any alphabet, and it is shown that K(n + 2, R + 1) ≤ K(n, R) holds for sufficiently large n.
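As an illustration of the quantity K(n, R) defined above, the following Python sketch finds it by exhaustive search for very small parameters. The helper names `covers` and `K` are our own, and the search uses covering radius at most R, the usual convention for K(n, R).

```python
from itertools import combinations, product

def covers(codewords, n, R):
    """True if every binary vector of length n is within Hamming distance R
    of some word in `codewords`."""
    dist = lambda u, v: sum(a != b for a, b in zip(u, v))
    return all(any(dist(v, c) <= R for c in codewords)
               for v in product((0, 1), repeat=n))

def K(n, R):
    """Smallest size of a binary code of length n with covering radius at
    most R, found by exhaustive search (feasible only for very small n)."""
    space = list(product((0, 1), repeat=n))
    for m in range(1, len(space) + 1):
        if any(covers(subset, n, R) for subset in combinations(space, m)):
            return m

print(K(3, 1))   # 2, attained e.g. by {000, 111}
print(K(4, 1))   # 4
```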
SIAM Journal on Algebraic Discrete Methods, 1987
In this two-part paper we introduce the notion of a stable code and give a new upper bound on the normalized covering radius of a code. The main results are that, for fixed k and large n, the minimal covering radius t[n, k] is realized by a normal code in which all but one of the columns have multiplicity 1; hence t[n + 2, k] = t[n, k] + 1 for sufficiently large n. We also show that codes with n ≤ 14, k ≤ 5, or d_min ≤ 5 are normal, and we determine the covering radius of all proper codes of dimension k ≤ 5. Examples of abnormal nonlinear codes are given. In Part I we investigate the general theory of normalized covering radius, while in Part II [this Journal, 8 (1987), pp. 619-627] we study codes of dimension k ≤ 5, and normal and abnormal codes.
Advances in Mathematics: Scientific Journal, 2020
In this paper, we determine the covering radius of codes over R = Z_2 R*, where R* = Z_2 + uZ_2 with u^2 = 0. We define block repetition codes over R and obtain the covering radius, with respect to the Euclidean weight and the Lee weight, of the block repetition codes and of the simplex codes of α-type and β-type over R.
For a linear [n, k, d] code C, all errors of weight at most t = ⌊(d − 1)/2⌋ are uniquely correctable. However, there are also uniquely correctable errors of weight greater than t. The Newton radius of a code is defined as the largest weight of a uniquely correctable error. In this work, the Newton radii of all binary cyclic codes of length up to 31 and of all ternary cyclic and negacyclic codes of length up to 22 are determined.
arXiv (Cornell University), 2022
We present a generalization of the Walsh-Hadamard transform that is suitable for applications in coding theory, especially for the computation of the weight distribution and the covering radius of a linear code over a finite field. The transform used in our research is a modification of the Vilenkin-Chrestenson transform. Instead of using all the vectors in the considered space, we take a maximal set of pairwise non-proportional vectors, which reduces the computational complexity.
Open Journal of Discrete Applied Mathematics, 2019
In this paper, the covering radius of codes over R = Z_2 R*, where R* = Z_2 + vZ_2 with v^2 = v, is discussed with respect to different weights. Block repetition codes over R are defined, and the covering radii of the block repetition codes and of the simplex codes of α-type and β-type over R are obtained.
IEEE Transactions on Information Theory, 2005
The monotone structure of correctable and uncorrectable errors given by the complete decoding for a binary linear code is investigated. New bounds on the error-correction capability of linear codes beyond half the minimum distance are presented, both for the best codes and for arbitrary codes under some restrictions on their parameters. It is proved that some known codes of low rate are as good as the best codes in an asymptotic sense.
Advances in Mathematics of Communications, 2010
In this work a heuristic algorithm for obtaining lower bounds on the covering radius of a linear code is developed. Using this algorithm the least covering radii of the binary linear codes of dimension 6 are determined. Upper bounds for the least covering radii of binary linear codes of dimensions 8 and 9 are derived.
2012 Information Theory and Applications Workshop, 2012
It is shown that, for all prime powers q and all k ≥ 3, if n ≥ (k − 1)q^k − 2(q^k − q)/(q − 1), then there exists an [n, k; q] code that is proper for error detection.
IEEE Transactions on Information Theory, 1973
This paper presents a table of upper and lower bounds on d_max(n, k), the maximum minimum distance over all binary linear (n, k) error-correcting codes. The table is obtained by combining the best of the existing bounds on d_max(n, k) with the minimum distances of known codes and a variety of code-construction techniques.
SIAM Journal on Algebraic Discrete Methods, 1987
In this two-part paper we introduce the notion of a stable code and give a new upper bound on the normalized covering radius of a code. The main results are that, for fixed k and large n, the minimal covering radius t[n, k] is realized by a normal code in which all but one of the columns have multiplicity 1; hence t[n + 2, k] = t[n, k] + 1 for sufficiently large n. We also show that codes with n ≤ 14, k ≤ 5, or d_min ≤ 5 are normal, and we determine the covering radius of all proper codes of dimension k ≤ 5. Examples of abnormal nonlinear codes are given. In Part I [this Journal, 8 (1987), pp. 604-618] we investigated the general theory of normalized covering radius, while in Part II we study codes of dimension k ≤ 5, and normal and abnormal codes.
European Journal of Combinatorics, 1996
We derive new upper bounds on the covering radius of a binary linear code as a function of its dual distance and dual-distance width. These bounds improve on the Delorme-Solé-Stokes bounds, and in a certain interval for binary linear codes they are also better than Tietäväinen's bound.
Discrete Applied Mathematics, 1998
We present a uniform approach to deriving upper bounds on the covering radius of a code as a function of its dual distance structure and its cardinality. We show that the bounds obtained previously by Delsarte, Helleseth et al., Tietäväinen, and Solé and Stokes follow as special cases. Moreover, we obtain an asymptotic improvement of these bounds using Chebyshev polynomials.
IEEE Transactions on Information Theory, 1992
Error control codes are widely used to increase the reliability of transmission of information over various forms of communications channels. The Hamming weight of a codeword is the number of nonzero entries in the word; the weights of the words in a linear code determine the error-correcting capacity of the code. The r-th generalized Hamming weight of a linear code C, denoted d_r(C), is the minimum of the support sizes of the r-dimensional subcodes of C. For instance, d_1(C) equals the traditional minimum Hamming weight of C. In 1991, Feng, Tzeng, and Wei proved that the second generalized Hamming weight d_2(C) = 8 for all double-error-correcting BCH(2^m, 5) codes. We study d_3(C) and higher Hamming weights for BCH(2^m, 5) codes by a close examination of the words of weight 5.
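The definition of d_r(C) can be checked directly on a small code. The Python sketch below enumerates r-dimensional subcodes of the [7,4] binary Hamming code (used here only as a familiar toy example, not one of the BCH codes from the paper) and takes the minimum support size; it uses the fact that the support of a subcode equals the union of the supports of any set of generators.

```python
from itertools import product, combinations

# A standard generator matrix of the [7,4] binary Hamming code.
G = [(1, 0, 0, 0, 0, 1, 1),
     (0, 1, 0, 0, 1, 0, 1),
     (0, 0, 1, 0, 1, 1, 0),
     (0, 0, 0, 1, 1, 1, 1)]
n, k = len(G[0]), len(G)

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

# All nonzero codewords.
codewords = []
for coeffs in product((0, 1), repeat=k):
    w = (0,) * n
    for c, row in zip(coeffs, G):
        if c:
            w = add(w, row)
    if any(w):
        codewords.append(w)

def rank(vectors):
    """Rank over GF(2) by Gaussian elimination."""
    rows = [list(v) for v in vectors]
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def d_r(r):
    """r-th generalized Hamming weight: minimum support size over all
    r-dimensional subcodes; the support of the subcode spanned by a set of
    codewords is the union of their supports."""
    best = n
    for gens in combinations(codewords, r):
        if rank(gens) == r:
            support = {i for g in gens for i in range(n) if g[i]}
            best = min(best, len(support))
    return best

print([d_r(r) for r in (1, 2, 3)])   # weight hierarchy of the [7,4] code: [3, 5, 6]
```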
2013
Coding theory is fun (to a certain extent). Can we live without error-correcting codes? Probably not: you would not be able to listen to CDs, retrieve correct data from your hard disk, or have quality communication over the telephone. Communication, storage, and the authenticity of ISBN numbers, among much else, are protected by means of error-correcting codes.
Repetition code: most storage media are prone to errors (CDs, DVDs, magnetic tapes), and in certain applications errors in the retrieved data are not acceptable. We therefore add redundancy: instead of storing 1 and 0 we store 111 and 000. Decoding is simple: with no errors, 000 → 0 and 111 → 1; with a single error, the majority rules, so 000, 001, 010, 100 → 0 and 111, 101, 110, 011 → 1. Two errors, however, cannot be corrected: if 000 is received as 110, we decode it as 1. (A minimal decoder for this code is sketched below.)
Hamming code: the Hamming code takes a block of k = 4 bits and encodes it into a block of n = 7 bits, and it can still correct one error. Comparison: the repetition code encodes 1 bit as 3 bits, the Hamming code encodes 4 bits as 7 bits, so in terms of coding efficiency (code rate) the Hamming code is clearly better, using less redundancy for the same error-correction capability. To correct more than a few errors in a codeword, other codes such as Reed-Muller codes exist.
Mariner story: back in 1969 the Mariner probes (and later the Voyagers) were supposed to send pictures from Mars to Earth. The problem was thermal noise when sending pixels with a 64-level grey scale, so redundancy was introduced: 6 bits (64 grey levels) were encoded as a 32-bit tuple. Such an encoding could correct up to 7 errors in transmission. Correcting errors is not free: we have to send bits 32/6 times faster, which reduces the total energy per bit and so increases the probability of bit error. Have we overpaid for the capability of correcting errors? The answer lies in computing the coding gain; if it is positive, we save energy (reduce the probability of error).
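A minimal Python sketch of the 3-fold repetition code described above; it shows single-error correction by majority vote and the two-error failure mentioned in the text (000 received as 110 decodes to 1).

```python
def rep3_encode(bit):
    """3-fold repetition code: 0 -> 000, 1 -> 111."""
    return (bit,) * 3

def rep3_decode(word):
    """Majority vote: decodes correctly iff at most one bit was flipped."""
    return int(sum(word) >= 2)

# One error is corrected ...
assert rep3_decode((0, 0, 1)) == 0
# ... but two errors defeat the code: 000 sent, 110 received, decoded as 1.
assert rep3_decode((1, 1, 0)) == 1
```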
2016
The paper describes qualitative properties of linear codes of small dimension and codimension, in particular codes analogous to the original Hamming code. The codes are constructed by algorithms based on the Monte Carlo method. We consider four-dimensional codes embedded in a seven-dimensional space over the field with twenty-five elements. An analysis of the Hamming distance of such codes is also presented in the paper.
There is a known best possible upper bound on the probability of undetected error for linear codes. The $[n,k;q]$ codes with probability of undetected error meeting the bound have support of size $k$ only. In this note, linear codes of full support ($=n$) are studied. A best possible upper bound on the probability of undetected error for such codes is given, and the codes with probability of undetected error meeting this bound are characterized.
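For context, the probability of undetected error of a binary linear code on a binary symmetric channel with crossover probability p is the standard expression P_ue(p) = Σ_{i≥1} A_i p^i (1−p)^{n−i}, where A_i is the number of codewords of weight i. The Python sketch below evaluates it for a small full-support code of our own choosing; it illustrates the quantity being bounded, not the bound from the note itself.

```python
from itertools import product

def weight_distribution(G, n):
    """Weight distribution A_i of the binary linear code generated by G."""
    A = [0] * (n + 1)
    k = len(G)
    for coeffs in product((0, 1), repeat=k):
        w = [0] * n
        for c, row in zip(coeffs, G):
            if c:
                w = [(a + b) % 2 for a, b in zip(w, row)]
        A[sum(w)] += 1
    return A

def p_undetected(A, p):
    """P_ue(p) = sum_{i>=1} A_i p^i (1-p)^(n-i): an error pattern goes
    undetected exactly when it is a nonzero codeword."""
    n = len(A) - 1
    return sum(A[i] * p**i * (1 - p)**(n - i) for i in range(1, n + 1))

# Toy [5,2] code of full support (every position is used by some codeword).
G = [(1, 1, 1, 0, 0), (0, 0, 1, 1, 1)]
A = weight_distribution(G, 5)
print(A, p_undetected(A, 0.01))
```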