2016, IEEE Journal on Selected Areas in Communications
His research interests include the design and analysis of coding systems, graph-based iterative algorithms, and Bayesian methods applied to decoding, detection, and estimation in communication systems.
IEEE Transactions on Information Theory, 2000
The decoding error probability of codes is studied as a function of their block length. It is shown that the existence of codes with a polynomially small decoding error probability implies the existence of codes with an exponentially small decoding error probability. Specifically, it is assumed that there exists a family of codes of length N and rate R = (1 − ε)C (where C is the capacity of a binary-symmetric channel) whose decoding error probability decreases inverse polynomially in N. It is shown that if the decoding error probability decreases sufficiently fast, but still only inverse polynomially fast in N, then there exists another such family of codes whose decoding error probability decreases exponentially fast in N. Moreover, if the decoding time complexity of the assumed family of codes is polynomial in N and 1/ε, then the decoding time complexity of the presented family is linear in N and polynomial in 1/ε. These codes are compared to the recently presented codes of Barg and Zémor, "Error Exponents of Expander Codes," IEEE Transactions on Information Theory, 2002, and "Concatenated Codes: Serial and Parallel," IEEE Transactions on Information Theory, 2005. It is shown that the latter families cannot be tuned to have exponentially decaying (in N) error probability and, at the same time, decoding time complexity linear in N and polynomial in 1/ε.
Index Terms: Concatenated codes, decoding complexity, decoding error probability, error exponent, expander codes, irregular repeat-accumulate (IRA) codes, iterative decoding, linear-time decoding, low-density parity-check (LDPC) codes.
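The rate expression R = (1 − ε)C can be made concrete with a short sketch. The numeric values of p and ε below are illustrative only, not taken from the paper:

```python
import math

def bsc_capacity(p):
    """Capacity of a binary-symmetric channel with crossover probability p:
    C = 1 - H(p), where H is the binary entropy function."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)  # binary entropy H(p)
    return 1.0 - h

# A family operating at a fraction (1 - eps) of capacity (illustrative values).
p, eps = 0.11, 0.1
C = bsc_capacity(p)
R = (1 - eps) * C
```

For p = 0.11 the capacity is roughly 0.5 bit per channel use, so a code at ε = 0.1 would have rate about 0.45.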
Journal of Communications and Networks, 2015
Since the invention of turbo codes in 1993, there has been enormous interest and progress in the field of capacity-approaching code constructions. Many classical constructions have been replaced by newer, better-performing codes with feasible decoding complexity. Most of these modern code constructions, such as turbo codes, Gallager's low-density parity-check (LDPC) codes and their generalizations, can be modeled by sparse graphical models.
2007 IEEE International Conference on Communications, 2007
We investigate sink decoding methods and performance analysis approaches for a network with intermediate node encoding (coded network). The network consists of statistically independent noisy channels. The sink bit error probability (BEP) is the performance measure. We first discuss soft-decision decoding without statistical information on the upstream channels (the channels not directly connected to the sink). An example shows that this decoder cannot significantly improve the BEP over the hard-decision decoder. We develop the union bound to analyze the decoding approach. The bound captures the asymptotic performance with respect to the signal-to-noise ratio (SNR). Using statistical information of the upstream channels, we then show the method of maximum-likelihood (ML) decoding. With this decoder, a significant improvement in the BEP is obtained. To evaluate the union bound for the ML decoder, we use an equivalent signal point procedure. It can be reduced to a least-squares problem with linear constraints for medium-to-high SNR.
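The union bound technique the abstract relies on can be sketched in its standard form for a linear code with BPSK signaling over an AWGN channel. The (7,4) Hamming code and its weight enumerator are used here purely as a familiar stand-in, not as the codes analyzed in the paper:

```python
import math

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound(weight_enum, rate, ebn0_db):
    """Union bound on block error probability for BPSK over AWGN:
    P_e <= sum_w A_w * Q(sqrt(2 * w * R * Eb/N0)).
    weight_enum maps each nonzero Hamming weight w to its multiplicity A_w."""
    ebn0 = 10 ** (ebn0_db / 10)
    return sum(a * q_func(math.sqrt(2 * w * rate * ebn0))
               for w, a in weight_enum.items())

# Weight enumerator of the (7,4) Hamming code (nonzero weights only).
hamming_we = {3: 7, 4: 7, 7: 1}
bound = union_bound(hamming_we, 4 / 7, 6.0)
```

As the abstract notes, such a bound is chiefly useful asymptotically: it tightens rapidly as SNR grows.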
IEEE Transactions on Information Theory, 2011
We propose an approximation of maximum-likelihood detection in ISI channels based on linear programming or message passing. We convert the detection problem into a binary decoding problem, which can be easily combined with LDPC decoding. We show that, for a certain class of channels and in the absence of coding, the proposed technique provides the exact ML solution without an exponential complexity in the size of channel memory, while for some other channels, this method has a nondiminishing probability of failure as SNR increases. Some analysis is provided for the error events of the proposed technique under linear programming.
I. INTRODUCTION
Intersymbol interference (ISI) is a characteristic of many data communications and storage channels. Systems operating on these channels employ error-correcting codes in conjunction with some ISI reduction technique, which, in magnetic recording systems, is often a conventional Viterbi detector. It is known that some gain will be obtained if the equalization and decoding blocks are combined at the receiver by exchanging soft information between them. A possible approach to achieving this gain is to use soft-output equalization methods such as the BCJR algorithm [1] or the soft-output Viterbi algorithm (SOVA) [2] along with iterative decoders. However, both BCJR and SOVA suffer from exponential complexity in the length of the channel memory. Kurkoski et al. [3] proposed two graph representations of the ISI channel that can be combined with the Tanner graph of the LDPC code for message-passing decoding. Their bit-based representation of the channel contains many 4-cycles, which results in a significant performance degradation compared to maximum-likelihood (ML) detection. On the other hand, message passing (MP) on their state-based representation, where messages contain state rather than bit information, has a performance and overall
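The exponential-complexity baseline the paper improves on can be made explicit with a brute-force ML detector over a short ISI channel. The 2-tap channel h = [1, 0.5] and the bit sequence are illustrative assumptions, and the exhaustive search is exactly the exponential enumeration that the paper's LP/MP technique avoids:

```python
from itertools import product

def isi_output(bits, h):
    """Noiseless output of BPSK symbols passed through an FIR ISI channel h."""
    x = [1.0 if b else -1.0 for b in bits]
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

def ml_detect(y, h, n):
    """Exhaustive ML detection: enumerate all 2**n inputs and minimize the
    Euclidean distance to the received sequence y (exponential in n)."""
    best, best_d = None, float("inf")
    for bits in product([0, 1], repeat=n):
        z = isi_output(bits, h)
        d = sum((yi - zi) ** 2 for yi, zi in zip(y, z))
        if d < best_d:
            best, best_d = bits, d
    return best

h = [1.0, 0.5]                  # illustrative 2-tap ISI channel
tx = (1, 0, 1, 1, 0)
y = isi_output(tx, h)           # noiseless received sequence
assert ml_detect(y, h, len(tx)) == tx
```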
IEEE Transactions on Information Theory, 1970
IEEE Transactions on Information Theory, 1991
A generalization of Sullivan's inequality on the ratio of the probability of a linear code to that of any of its cosets is proved. Starting from this inequality, a sufficient condition for successful decoding of linear codes by a probabilistic method is derived. A probabilistic decoding algorithm for low-density parity-check codes is also analyzed. The results obtained allow one to estimate experimentally the probability of successful decoding using these probabilistic algorithms.
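A minimal sketch of the kind of probabilistic (bit-flipping) decoding analyzed for low-density parity-check codes: repeatedly flip the bit involved in the most unsatisfied checks. The (7,4) Hamming parity-check matrix below is an illustrative small stand-in (it is not low-density), not the codes studied in the paper:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (illustrative small example).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bit_flip_decode(y, H, max_iters=10):
    """Iteratively flip the bit participating in the most unsatisfied checks."""
    y = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(y) % 2
        if not syndrome.any():
            return y                            # all checks satisfied
        counts = H[syndrome == 1].sum(axis=0)   # unsatisfied checks per bit
        y[int(np.argmax(counts))] ^= 1          # flip the worst offender
    return y

codeword = np.zeros(7, dtype=int)   # the all-zero word is a codeword
received = codeword.copy()
received[6] ^= 1                    # inject a single bit error
decoded = bit_flip_decode(received, H)
assert (decoded == codeword).all()
```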
Information Sciences, 1991
The field of error correcting codes has developed rapidly over the past forty years. During the past decade in particular two very significant developments have occurred and these are briefly reviewed here. From this basis, suggestions are made as to where coding principles might find application in the future.
2008 24th Biennial Symposium on Communications, 2008
This paper is motivated by the problem of error control in network coding when errors are introduced in a random fashion (rather than chosen by an adversary). An additive-multiplicative matrix channel is considered as a model for random network coding. The model assumes that n packets of length m are transmitted over the network, and up to t erroneous packets are randomly chosen and injected into the network. Upper and lower bounds on capacity are obtained for any channel parameters, and asymptotic expressions are provided in the limit of large field or matrix size. A simple coding scheme is presented that achieves capacity in both limiting cases. The scheme has decoding complexity O(n²m) and a probability of error that decreases exponentially both in the packet length and in the field size in bits. Extensions of these results for coherent network coding are also presented.
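The additive-multiplicative matrix channel described above can be sketched over GF(2): the received batch is Y = AX + Z, where A is a random transfer matrix and Z comes from injecting up to t erroneous packets, so Z has rank at most t. The parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, t = 8, 16, 2   # n packets of length m bits, up to t corrupted packets

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]       # move pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                   # eliminate column entries
        rank += 1
    return rank

X = rng.integers(0, 2, size=(n, m))               # transmitted packets
A = rng.integers(0, 2, size=(n, n))               # random transfer matrix
Z = np.zeros((n, m), dtype=np.int64)
Z[rng.choice(n, size=t, replace=False)] = rng.integers(0, 2, size=(t, m))
Y = (A @ X + Z) % 2                               # additive-multiplicative channel

assert gf2_rank(Z) <= t   # the injected-error matrix has rank at most t
```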
2009
We study iterative multiuser detection in large randomly spread code division multiple access systems under the assumption that the number of users accessing the channel is unknown to the receiver. In particular, we focus on the factor graph representation and iterative algorithms based on belief propagation. We build an iterative scheme that detects the encoded data and the users' activity. By using the replica method from statistical physics, we characterize the density evolution of the iterative detector when the number of potential users is large. As a result, we provide a general fixed-point equation where the nature of the exchanged probabilities depends on the users' activity. Finally, we show that the structure of the users' codes yields a multiuser efficiency fixed-point equation that is equivalent to the case of all-active users with a system load scaled by the activity rate.
Problems of Information Transmission, 2010
We consider the decoding for Silva-Kschischang-Kötter random network codes based on Gabidulin's rank-metric codes. The model of a random network coding channel can be reduced to transmitting matrices of a rank code through a channel introducing three types of additive errors. The first type is called random rank errors. To describe other types, the notions of generalized row erasures and generalized column erasures are introduced. An algorithm for simultaneous correction of rank errors and generalized erasures is presented. An example is given.
Classical, Semi-classical and Quantum Noise, 2011
This paper is a tutorial on the application of graph theoretic techniques in classical coding theory. A fundamental problem in coding theory is to determine the maximum size of a code satisfying a given minimum Hamming distance. This problem is thought to be extremely hard and still not completely solved. In addition to a number of closed form expressions for special cases and some numerical results, several relevant bounds have been derived over the years.
2011
In this paper, we prove the existence of capacity-achieving linear codes with random binary sparse generating matrices. The results on the existence of capacity-achieving linear codes in the literature are limited to random binary codes with equiprobable generating-matrix elements and sparse parity-check matrices. Moreover, the codes with sparse generating matrices reported in the literature have not been proved to be capacity achieving.
Advances in Network Information Theory: Dimacs Workshop …, 2004
DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Volume 66, 2004. "Linear Network Codes: A Unified Framework for Source, Channel, and Network Coding," Michelle Effros, Muriel Medard, Tracey Ho, Siddharth Ray, ...
2012 IEEE International Symposium on Information Theory Proceedings, 2012
A variety of low-density parity-check (LDPC) ensembles have now been observed to approach capacity with message-passing decoding. However, all of them use soft (i.e., nonbinary) messages and a posteriori probability (APP) decoding of their component codes. In this paper, we analyze a class of spatially-coupled generalized LDPC codes and observe that, in the high-rate regime, they can approach capacity under iterative hard-decision decoding. These codes can be seen as generalized product codes and are closely related to braided block codes.
2010
Index Coding has received considerable attention recently, motivated in part by applications such as fast video-on-demand and efficient communication in wireless networks, and in part by its connection to Network Coding. Optimal encoding schemes and efficient heuristics were studied in various settings, while also leading to new results for Network Coding such as improved gaps between linear and non-linear capacity as well as hardness of approximation. The basic setting of Index Coding encodes the side-information relation, the problem input, as an undirected graph, and the fundamental parameter is the broadcast rate β, the average communication cost per bit for sufficiently long messages (i.e., the non-linear vector capacity). Recent nontrivial bounds on β were derived from the study of other Index Coding capacities (e.g., the scalar capacity β₁) by Bar-Yossef et al. (2006), Lubetzky and Stav (2007) and Alon et al. (2008). However, these indirect bounds shed little light on the behavior of β: there was no known polynomial-time algorithm for approximating β in a general network to within a nontrivial (i.e., o(n)) factor, and the exact value of β remained unknown for any graph where Index Coding is nontrivial.
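The standard sandwich on the broadcast rate, α(G) ≤ β(G) ≤ clique-cover number of G, can be checked by brute force on a small side-information graph. The 5-cycle C5 is used here as a well-known illustrative example (β(C5) = 5/2, strictly between the two bounds):

```python
from itertools import combinations, product

# Side-information graph: the 5-cycle C5 (illustrative example).
n = 5
edges = {(i, (i + 1) % n) for i in range(n)}
adj = lambda u, v: (u, v) in edges or (v, u) in edges

# Lower bound: independence number alpha(G) <= beta.
alpha = max(len(S)
            for r in range(n + 1) for S in combinations(range(n), r)
            if all(not adj(u, v) for u, v in combinations(S, 2)))

# Upper bound: beta <= clique-cover number = chromatic number of complement.
def chromatic_complement():
    for k in range(1, n + 1):
        for colors in product(range(k), repeat=n):
            # an edge in the complement = a non-adjacent pair in G
            if all(colors[u] != colors[v]
                   for u, v in combinations(range(n), 2) if not adj(u, v)):
                return k
    return n

chi_bar = chromatic_complement()
# For C5: alpha = 2 and chi_bar = 3, so 2 <= beta <= 3 (in fact beta = 5/2).
```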
2014 IEEE Information Theory Workshop (ITW 2014), 2014
A low-density parity-check (LDPC) code is a linear block code described by a sparse parity-check matrix, which can be efficiently represented by a bipartite Tanner graph. The standard iterative decoding algorithm, known as belief propagation, passes messages along the edges of this Tanner graph. Density evolution is an efficient method to analyze the performance of the belief propagation decoding algorithm for a particular LDPC code ensemble, enabling the determination of a decoding threshold. The basic problem addressed in this work is how to optimize the Tanner graph so that the decoding threshold is as large as possible. We introduce a new code optimization technique that restricts the search-space range, which can be viewed as reducing randomness in differential evolution or limiting the range of an exhaustive search. This technique is applied to the design of good irregular LDPC codes and multi-edge-type LDPC codes.
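The threshold computation the abstract describes is easiest to see on the binary erasure channel, where density evolution reduces to a one-dimensional recursion. This sketch, for the regular (3,6) ensemble rather than the irregular ensembles the paper optimizes, locates the belief-propagation threshold (approximately 0.4294) by bisection:

```python
def de_converges(eps, dv=3, dc=6, iters=2000, tol=1e-10):
    """Density evolution for a regular (dv, dc) LDPC ensemble on the BEC:
    the erasure probability follows x -> eps * (1 - (1 - x)**(dc-1))**(dv-1).
    Returns True if the recursion drives the erasure probability to zero."""
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

# Bisection on the channel erasure probability to locate the BP threshold.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if de_converges(mid) else (lo, mid)

threshold = lo   # approx. 0.4294 for the (3,6) ensemble
```

An optimizer of the kind the paper proposes would wrap such a threshold evaluation inside a (restricted) search over degree distributions.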