2012 IEEE International Symposium on Information Theory Proceedings, 2012
In this paper, the existence of capacity-achieving linear codes with arbitrarily sparse generator matrices is proved. In particular, we show the existence of capacity-achieving codes for which the density of ones in the generator matrix is arbitrarily low. The existing results on capacity-achieving linear codes in the literature are limited to codes whose generator matrix elements are zero or one with necessarily equal probability, yielding a non-sparse generator matrix and hence a high encoding complexity. An interesting trade-off between the sparsity of the generator matrix and the value of the error exponent is also demonstrated. Compared to the existing results in the literature, which are limited to codes with non-sparse generator matrices, the proposed approach is novel and more concise. Although the focus in this paper is on the Binary Symmetric and Binary Erasure Channels, the results can be easily extended to other discrete memoryless symmetric channels.
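As a toy illustration of the kind of ensemble this abstract discusses (not the paper's proof construction), the sketch below draws a k x n generator matrix over GF(2) with i.i.d. Bernoulli(density) entries for a small density and encodes a message by a matrix-vector product; the parameter values are arbitrary.

```python
import numpy as np

def sparse_generator(k, n, density, rng):
    """Random k x n generator matrix over GF(2) with i.i.d. Bernoulli(density) entries."""
    return (rng.random((k, n)) < density).astype(np.uint8)

def encode(message, G):
    """Encode a length-k binary message as a length-n codeword over GF(2)."""
    return (message @ G) % 2

rng = np.random.default_rng(0)
k, n, density = 100, 200, 0.05          # rate 1/2 with 5% ones in G; illustrative values only
G = sparse_generator(k, n, density, rng)
u = rng.integers(0, 2, size=k, dtype=np.uint8)
x = encode(u, G)
print(f"fraction of ones in G: {G.mean():.3f}, first codeword bits: {x[:16]}")
```

The point of the sparsity is visible in the encoder: the cost of computing u @ G scales with the number of ones in G rather than with k*n.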
Information Theory, 2009. ISIT 2009. IEEE …, 2009
We establish a general framework for the construction of small ensembles of capacity-achieving linear codes for a wide range of (not necessarily memoryless) discrete symmetric channels, and in particular the binary erasure and binary symmetric channels. The main tools used in our constructions are randomness extractors and lossless condensers, which are regarded as central tools in theoretical computer science. As with random codes, the resulting ensembles preserve their capacity-achieving properties under any change of basis. Using known explicit constructions of condensers, we obtain specific ensembles whose size is as small as polynomial in the block length. By applying our construction to Justesen's concatenation scheme, we obtain explicit capacity-achieving codes for the BEC (resp., BSC) with almost linear time encoding and almost linear time (resp., quadratic time) decoding and exponentially small error probability.
2012 IEEE 27th Convention of Electrical and Electronics Engineers in Israel, 2012
In 1973, Gallager proved that the random-coding bound is exponentially tight for the random code ensemble at all rates, even below expurgation. This result explained that the random-coding exponent does not achieve the expurgation exponent due to the properties of the random ensemble, irrespective of the utilized bounding technique. It has been conjectured that this same behavior holds true for a random ensemble of linear codes. This conjecture is proved in this paper. Additionally, it is shown that this property extends to Poltyrev's random-coding exponent for a random ensemble of lattices.
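For reference on the exponents mentioned in this abstract (standard background definitions, not results of the paper): Gallager's random-coding exponent for a discrete memoryless channel with input distribution Q and transition law W(y|x) is

```latex
E_0(\rho, Q) = -\log \sum_{y} \Big( \sum_{x} Q(x)\, W(y \mid x)^{\frac{1}{1+\rho}} \Big)^{1+\rho},
\qquad
E_r(R) = \max_{0 \le \rho \le 1} \max_{Q} \big[ E_0(\rho, Q) - \rho R \big].
```

The expurgated exponent exceeds E_r(R) at low rates; the tightness result above says this gap is a property of the (linear) random ensemble itself rather than of the bounding technique.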
arXiv (Cornell University), 2016
Random linear network coding (RLNC) in theory achieves the max-flow capacity of multicast networks, at the cost of high decoding complexity. To improve the performance-complexity tradeoff, we consider the design of sparse network codes. A generation-based strategy is employed in which source packets are grouped into overlapping subsets called generations. RLNC is performed only amongst packets belonging to the same generation throughout the network so that sparseness can be maintained. In this paper, generation-based network codes with low reception overheads and decoding costs are designed for transmitting on the order of 10^2 to 10^3 source packets. A low-complexity overhead-optimized decoder is proposed that exploits "overlaps" between generations. The sparseness of the codes is exploited through local processing and multiple rounds of pivoting of the decoding matrix. To demonstrate the efficacy of our approach, codes comprising a binary precode, random overlapping generations, and binary RLNC are designed. The results show that our designs can achieve negligible code overheads at low decoding costs, and outperform existing network codes that use the generation-based strategy.
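The sketch below illustrates the generation-based strategy in its simplest form: packets are grouped into overlapping generations and each coded packet is a random GF(2) combination of packets from a single generation. The consecutive-window generation rule and all parameter values are assumptions for illustration; the paper uses random overlapping generations and a binary precode.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_generations(num_packets, gen_size, overlap):
    """Assign packet indices to consecutive generations that overlap by `overlap` packets."""
    step = gen_size - overlap
    gens, start = [], 0
    while start < num_packets:
        gens.append(list(range(start, min(start + gen_size, num_packets))))
        start += step
    return gens

def encode_within_generation(source, generation):
    """One coded packet: a random GF(2) combination of the source packets in a single generation."""
    coeffs = rng.integers(0, 2, size=len(generation), dtype=np.uint8)
    payload = np.zeros_like(source[0])
    for c, idx in zip(coeffs, generation):
        if c:
            payload ^= source[idx]
    return coeffs, payload

source = rng.integers(0, 2, size=(300, 64), dtype=np.uint8)      # 300 source packets of 64 bits
gens = make_generations(len(source), gen_size=32, overlap=8)     # illustrative sizes, not the paper's
gen_id = int(rng.integers(0, len(gens)))
coeffs, payload = encode_within_generation(source, gens[gen_id])
print(f"generation {gen_id} has {len(gens[gen_id])} packets; coding vector weight = {int(coeffs.sum())}")
```

Because coding vectors are nonzero only inside one generation, the global decoding matrix stays sparse even though each generation is densely coded.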
IEEE Access, 2017
While random linear network coding is known to improve network reliability and throughput, its high costs for delivering coding coefficients and decoding represent an obstacle where nodes have limited power to transmit and decode packets. In this paper, we propose sparse network codes for scenarios where low coding vector weights and low decoding cost are crucial. We consider generation-based network codes where source packets are grouped into overlapping subsets called generations, and coding is performed only on packets within the same generation in order to achieve sparseness and low complexity. A sparse code is proposed that is comprised of a precode and random overlapping generations. The code is shown to be much sparser than existing codes that enjoy similar code overhead. To efficiently decode the proposed code, a novel low-complexity overhead-optimized decoder is proposed in which code sparsity is exploited through local processing and multiple rounds of pivoting. Through extensive simulation comparison with existing schemes, we show that short transmissions on the order of 10^2 to 10^3 source packets, a size convenient for many applications of interest, can be efficiently decoded by the proposed decoder.
Index Terms: Network coding, sparse codes, random codes, generations, code overhead, efficient decoding.
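To give a concrete feel for the "pivoting of the decoding matrix" mentioned in these abstracts, here is a generic within-generation GF(2) Gaussian elimination with pivoting; it is not the paper's overhead-optimized decoder, just the basic linear-algebra step that such decoders organize and reuse across overlapping generations.

```python
import numpy as np

def gf2_solve(A, B):
    """Solve A X = B over GF(2) by Gaussian elimination with pivoting.

    A holds the coding vectors (rows) collected for one generation and B the
    corresponding coded payloads.  Returns the decoded packets, or None if the
    rows collected so far are rank deficient (more coded packets are needed).
    """
    A = np.array(A, dtype=np.uint8) % 2
    B = np.array(B, dtype=np.uint8) % 2
    m, k = A.shape
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, m) if A[r, col]), None)
        if piv is None:
            return None
        A[[row, piv]] = A[[piv, row]]
        B[[row, piv]] = B[[piv, row]]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                B[r] ^= B[row]
        row += 1
    return B[:k]

# Toy check: 3 source packets of one generation, 4 received coded packets.
src = np.array([[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]], dtype=np.uint8)
C = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=np.uint8)   # coding vectors
Y = (C @ src) % 2                                                            # received payloads
assert np.array_equal(gf2_solve(C, Y), src)
```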
Advances in Mathematics of Communications, 2019
In this article we present a class of codes with few weights arising from a special type of linear set. We explicitly determine the weights of such codes, their weight enumerators, and possible choices for their generator matrices. In particular, our construction also yields linear codes with three weights and, in some cases, almost MDS codes. The interest in these codes lies in their applications to authentication codes and secret sharing schemes, and in their connections with further objects such as association schemes and graphs.
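To make "weight enumerator" and "few weights" concrete, here is a brute-force computation of the weight distribution of a small linear code over GF(q). The example generator matrix is the ternary tetracode (a constant-weight, i.e., one-weight, code), used only for illustration; it is not one of the linear-set codes constructed in the paper.

```python
from collections import Counter
from itertools import product
import numpy as np

def weight_distribution(G, q):
    """Brute-force weight distribution of the linear code generated by G over GF(q)."""
    k, n = G.shape
    counts = Counter()
    for msg in product(range(q), repeat=k):
        codeword = np.mod(np.array(msg) @ G, q)
        counts[int(np.count_nonzero(codeword))] += 1
    return dict(sorted(counts.items()))

# The ternary tetracode, a [4, 2] code over GF(3): all 8 nonzero codewords have weight 3.
G = np.array([[1, 0, 1, 1],
              [0, 1, 1, 2]])
print(weight_distribution(G, q=3))   # {0: 1, 3: 8}
```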
Sparse random linear network coding (SRLNC) is an attractive technique proposed in the literature to reduce the decoding complexity of random linear network coding. Recognizing the fact that the existing SRLNC schemes are not efficient in terms of the required reception overhead, we consider the problem of designing overhead-optimized SRLNC schemes. To this end, we introduce a new design of SRLNC scheme that enjoys very small reception overhead while maintaining the main benefit of SRLNC, i.e., its linear encoding/decoding complexity. We also provide a mathematical framework for the asymptotic analysis and design of this class of codes based on density evolution (DE) equations. To the best of our knowledge, this work introduces the first DE analysis in the context of network coding.
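For context only, the classical density-evolution template for erasure recovery (the standard recursion for LDPC codes over the BEC, not the DE equations derived in the work above): with channel erasure probability epsilon and edge-perspective degree distributions lambda(x) and rho(x), the erasure probability passed from variable to check nodes at iteration l evolves as

```latex
x_{\ell+1} = \varepsilon \, \lambda\!\left( 1 - \rho(1 - x_\ell) \right),
```

and decoding succeeds asymptotically when the iteration converges to zero. The work above develops an analogous fixed-point style analysis tailored to sparse generation-based network codes.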
IEEE Transactions on Information Theory, 2017
Shannon in his 1956 seminal paper introduced the concept of the zero error capacity, C_0, of a noisy channel. This is defined as the least upper bound of rates at which it is possible to transmit information with zero probability of error. At present, not many codes are known to achieve the zero error capacity. In this paper, some codes which achieve zero error capacities in limited magnitude error channels are described. The code lengths of these zero error capacity achieving codes can be of any finite length n = 1, 2, ..., in contrast to the long lengths required for the known regular capacity achieving codes such as turbo codes, LDPC codes, and polar codes. Both wrap-around and non-wrap-around limited magnitude error models are considered in this paper. For the non-wrap-around error model, the exact values of the zero error capacity are derived, and optimal non-systematic and systematic codes are designed. The non-systematic codes achieve the zero error capacity with any finite length. The optimal systematic codes achieve the systematic zero error capacity of the channel, which is defined as the zero error capacity under the additional requirement that the communication be carried out with a systematic code. It is also shown that the rates of the proposed systematic codes are equal to or approximately equal to the zero error capacity of the channel. For the wrap-around model, bounds are derived for the zero error capacity, and in many cases the bounds give the exact value. In addition, optimal wrap-around non-systematic and systematic codes are developed which either achieve or are close to achieving the zero error capacity with finite length.
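A minimal sketch of why limited-magnitude channels admit zero-error codes at any length: if errors have magnitude at most ell and symbols are restricted to multiples of 2*ell+1, every received value decodes unambiguously by rounding to the nearest allowed symbol. This only illustrates the phenomenon for a single symbol; it is not one of the optimal constructions of the paper, and the alphabet size below is an arbitrary choice.

```python
def encode_symbol(m, ell, q):
    """Map message symbol m to a channel symbol that survives any error of magnitude <= ell.

    Channel alphabet is {0, ..., q-1} (non-wrap-around); allowed symbols are multiples of 2*ell+1.
    """
    step = 2 * ell + 1
    assert 0 <= m * step <= q - 1, "message symbol out of range for this alphabet"
    return m * step

def decode_symbol(y, ell):
    """Round the (possibly corrupted) symbol back to the nearest multiple of 2*ell+1."""
    step = 2 * ell + 1
    return round(y / step)

# Example: alphabet size q = 16, error magnitude at most ell = 1 -> allowed symbols 0, 3, ..., 15.
ell, q = 1, 16
for m in range(6):
    x = encode_symbol(m, ell, q)
    for e in (-ell, 0, ell):
        y = min(max(x + e, 0), q - 1)          # non-wrap-around: values clip at the boundaries
        assert decode_symbol(y, ell) == m
print("all limited-magnitude errors corrected with zero error probability")
```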
1997
Combining the linear programming approach with the Plotkin-Johnson argument for constant weight codes, we derive upper bounds on the size of codes of length n and minimum distance d = (n − j)/2, 0 < j < n^(1/3). For j = o(n^(1/3)) these bounds practically coincide with (are slightly better than) the Tietäväinen bound. For fixed j and for j proportional to n^(1/3), j < n^(1/3) − (2/9) ln n, it improves on the earlier known results. Keywords: Upper bounds, Plotkin bound, Tietäväinen bound, McEliece bound.
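For reference, the classical Plotkin bound underlying the Plotkin-Johnson argument mentioned above (a standard fact, not a result of this work): for a binary code of length n and minimum distance d with 2d > n,

```latex
A(n, d) \le 2 \left\lfloor \frac{d}{2d - n} \right\rfloor .
```

The regime studied in the abstract, d = (n − j)/2 with small j, sits just below this range (2d < n), which is where the linear programming refinement is applied.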
ArXiv, 2020
Known constructions of locally decodable codes (LDCs) suffer from one of two undesirable properties: low rate or high locality (polynomial in the length of the message). In settings where the encoder/decoder have already exchanged cryptographic keys and the channel is a probabilistic polynomial time (PPT) algorithm, it is possible to circumvent these barriers and design LDCs with constant rate and small locality. However, the assumption that the encoder/decoder have exchanged cryptographic keys is often prohibitive. We thus consider the problem of designing explicit and efficient LDCs in settings where the channel is slightly more constrained than the encoder/decoder with respect to some resource, e.g., space or (sequential) time. Given an explicit function $f$ that the channel cannot compute, we show how the encoder can transmit a random secret key to the local decoder using $f(\cdot)$ and a random oracle $H(\cdot)$. This allows bootstrapping from the private-key LDC construction of Ostrovsky, Pande...
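One plausible reading of the key-transfer step described above, written as a toy sketch. Everything here is an assumption made for illustration: SHA-256 stands in for the random oracle H, an iterated hash chain stands in for the function f that the resource-bounded channel cannot evaluate, and the actual construction and security argument are in the paper.

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    """Stand-in for the random oracle (assumption: modeled here by SHA-256)."""
    return hashlib.sha256(data).digest()

def f(seed: bytes, iterations: int = 100_000) -> bytes:
    """Stand-in for a function the resource-bounded channel cannot compute
    (assumption: a long sequential hash chain; iteration count is illustrative)."""
    x = seed
    for _ in range(iterations):
        x = hashlib.sha256(x).digest()
    return x

# Encoder: pick a public random seed and derive a secret key from it.
seed = os.urandom(16)
key = H(f(seed))

# Decoder (with more sequential time than the channel): re-derive the same key
# from the public seed and use it with a private-key LDC, as in the bootstrapping
# step described in the abstract.
assert H(f(seed)) == key
```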
IEEE Transactions on Information Theory, 2008
We present capacity achieving multilevel run-length-limited (ML-RLL) codes that can be decoded by a sliding window of size 2.
Designs, Codes and Cryptography, 2007
We determine the minimum length n_q(k, d) for some linear codes with k ≥ 5 and q ≥ 3. We prove that n_q(k, d) = g_q(k, d) + 1 for q^(k−1) − 2q^((k−1)/2) − q + 1 ≤ d ≤ q^(k−1) − 2q^((k−1)/2) when k is odd, for q^(k−1) − q^(k/2) − q^(k/2−1) − q + 1 ≤ d ≤ q^(k−1) − q^(k/2) − q^(k/2−1) when k is even, and for 2q^(k−1) − 2q^(k−2) − q^2 − q + 1 ≤ d ≤ 2q^(k−1) − 2q^(k−2) − q^2.
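In this literature n_q(k, d) is the shortest length of a q-ary linear code of dimension k and minimum distance d, and g_q(k, d) is, as is standard, the Griesmer bound. A small helper makes the bound concrete (the example parameters are my own, chosen because the ternary Golay code meets the bound with equality):

```python
from math import ceil

def griesmer(k: int, d: int, q: int) -> int:
    """Griesmer lower bound g_q(k, d) = sum_{i=0}^{k-1} ceil(d / q^i) on the length of an [n, k, d]_q code."""
    return sum(ceil(d / q**i) for i in range(k))

# Example: the ternary Golay code has parameters [11, 6, 5]_3 and meets the bound exactly.
print(griesmer(6, 5, 3))   # 11
```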
2003
For a code C = C(n, M), the level-k code of C, denoted C_k, is the set of all vectors resulting from a linear combination of precisely k distinct codewords of C. We prove that if k is any positive integer divisible by 8, and n = λk, M = βk ≥ 2k, then there is a codeword in C_k whose weight is either 0 or at most … . In particular, if λ < (4β − 2)^2/48, then there is a codeword in C_k whose weight is n/2 − T(n).
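To make the definition of the level-k code concrete, here is a brute-force sketch that lists C_k for a tiny binary code; the example code and parameters are purely illustrative and not taken from the paper.

```python
from itertools import combinations
import numpy as np

def level_k_code(C, k):
    """All vectors obtained as the GF(2) sum of precisely k distinct codewords of C."""
    C = np.asarray(C, dtype=np.uint8)
    vectors = {tuple(np.bitwise_xor.reduce(C[list(idx)]))
               for idx in combinations(range(len(C)), k)}
    return sorted(vectors)

# Tiny example: the [3, 2] even-weight code and its level-2 code.
C = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
for v in level_k_code(C, 2):
    print(v, "weight", sum(v))
```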
IEEE Transactions on Information Theory, 2015
The set of all subspaces of F_q^n is denoted by P_q(n). The subspace distance d_S(X, Y) = dim(X) + dim(Y) − 2 dim(X ∩ Y) defined on P_q(n) turns it into a natural coding space for error correction in random network coding. A subset of P_q(n) is called a code and the subspaces that belong to the code are called codewords. Motivated by classical coding theory, a linear coding structure can be imposed on a subset of P_q(n). Braun, Etzion and Vardy conjectured that the largest cardinality of a linear code that contains F_q^n is 2^n. In this paper, we prove this conjecture and characterize the maximal linear codes that contain F_q^n.
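A small sketch of the subspace distance for subspaces of F_2^n given by spanning sets of row vectors; it uses dim(X ∩ Y) = dim X + dim Y − dim(X + Y), so only GF(2) ranks are needed. The code is illustrative and not from the paper.

```python
import numpy as np

def gf2_rank(rows):
    """Rank over GF(2) of the row vectors in `rows` (a 2-D 0/1 array)."""
    A = np.array(rows, dtype=np.uint8) % 2
    rank = 0
    for col in range(A.shape[1]):
        piv = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if piv is None:
            continue
        A[[rank, piv]] = A[[piv, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def subspace_distance(X, Y):
    """d_S(X, Y) = dim X + dim Y - 2 dim(X ∩ Y), with dim(X ∩ Y) = dim X + dim Y - dim(X + Y)."""
    dim_x, dim_y = gf2_rank(X), gf2_rank(Y)
    dim_sum = gf2_rank(np.vstack([X, Y]))      # dim(X + Y) from the stacked spanning sets
    dim_int = dim_x + dim_y - dim_sum
    return dim_x + dim_y - 2 * dim_int

X = [[1, 0, 0, 0], [0, 1, 0, 0]]               # a 2-dimensional subspace of F_2^4
Y = [[0, 1, 0, 0], [0, 0, 1, 0]]               # another 2-dimensional subspace
print(subspace_distance(X, Y))                 # 2: the subspaces share a 1-dimensional intersection
```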
IEEE Transactions on Information Theory, 2005
The weight spectrum of sequences of binary linear codes that achieve arbitrarily small word error probability on a class of noisy channels at a nonzero rate is studied. We refer to such sequences as good codes. The class of good codes includes turbo, low-density parity-check, and repeat-accumulate codes. We show that a sequence of codes is good when transmitted over a memoryless binary-symmetric channel (BSC) or an additive white Gaussian noise (AWGN) channel if and only if the slope of its spectrum is finite everywhere and its minimum Hamming distance goes to infinity with no requirement on its rate growth. The extension of these results to code ensembles in probabilistic terms follows in a direct manner. We also show that the sufficient condition holds for any binary-input memoryless channel.
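As context for the terminology (a standard definition, not a claim of the paper): if A_n(w) denotes the number of weight-w codewords in the n-th code of the sequence, the asymptotic spectrum is usually taken as

```latex
r(\delta) = \limsup_{n \to \infty} \frac{1}{n} \ln A_n\!\big(\lfloor \delta n \rfloor\big),
\qquad 0 \le \delta \le 1,
```

and the "slope" in the abstract refers to the growth rate of this function, in particular its behavior for small δ; the precise condition is given in the paper.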