Philips Journal of Research
The systematic design of dc-constrained codes based on codewords of fixed length is considered. Simple recursion relations for enumerating the number of codewords satisfying a constraint on the maximum unbalance of ones and zeros in a codeword are derived. An enumerative scheme for encoding and decoding maximum-unbalance-constrained codewords with binary symbols is developed. Examples of constructions of transmission systems based on unbalance-constrained codewords are given. A worked example of an 8b10b channel code is given, which is of particular interest because of its practical simplicity and relative efficiency.
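The enumerative idea behind such schemes can be illustrated for the special case of exactly balanced words. The sketch below (plain Python, not the paper's recursion on maximum unbalance) ranks and unranks fixed-length, fixed-weight binary words lexicographically using binomial coefficients:

```python
from math import comb

def rank(word):
    """Lexicographic index of `word` among all binary words of the
    same length and weight (Cover-style enumerative scheme, 0 < 1)."""
    n, ones_left = len(word), sum(word)
    index = 0
    for i, bit in enumerate(word):
        if bit == 1:
            # all words that put a 0 here come first in the ordering
            index += comb(n - i - 1, ones_left)
            ones_left -= 1
    return index

def unrank(index, n, w):
    """Inverse mapping: the `index`-th length-n weight-w word."""
    word = []
    for i in range(n):
        c = comb(n - i - 1, w)  # number of words continuing with a 0 here
        if index < c:
            word.append(0)
        else:
            word.append(1)
            index -= c
            w -= 1
    return word
```

The encoder stores only binomial coefficients rather than a codebook, which is what makes enumerative schemes attractive for fixed-length constrained codes.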
Philips Journal of Research
In digital transmission it is sometimes desirable for the channel stream to have low power near zero frequency. Suppression of the low-frequency components is achieved by constraining the unbalance of the transmitted positive and negative pulses. Rate and spectral properties of unbalance constrained codes with binary symbols based on simple bi-mode coding schemes are calculated.
Doctoral Thesis, Eindhoven University of Technology (…
Journal of Communications
In this paper, the construction of binary balanced codes is revisited. Binary balanced codes refer to sets of bipolar codewords in which the number of "1"s in each codeword equals the number of "0"s. The first algorithm for balancing codes was proposed by Knuth in 1986; however, its redundancy is almost twice that of the full set of balanced codewords, which is the minimal achievable redundancy. We present an efficient and simple construction with a redundancy approaching the minimal achievable one.
IEEE Transactions on Information Theory, 1989
2009 IEEE International Symposium on Information Theory, 2009
The prior art construction of sets of balanced codewords by Knuth is attractive for its simplicity and absence of look-up tables, but the redundancy of the balanced codes generated by Knuth's algorithm falls a factor of two short with respect to capacity. We present a new construction, which is simple, does not use look-up tables, and is less redundant than Knuth's construction. In the new construction, the user word is modified in the same way as in Knuth's construction, that is by inverting a segment of user symbols. The prefix that indicates which segment has been inverted, however, is encoded and decoded in a different, more efficient, way.
IEEE Journal on Selected Areas in Communications, 2000
The prior art construction of sets of balanced codewords by Knuth is attractive for its simplicity and absence of look-up tables, but the redundancy of the balanced codes generated by Knuth's algorithm falls a factor of two short with respect to the minimum required. We present a new construction, which is simple, does not use look-up tables, and is less redundant than Knuth's construction. In the new construction, the user word is modified in the same way as in Knuth's construction, that is by inverting a segment of user symbols. The prefix that indicates which segment has been inverted, however, is encoded in a different, more efficient, way.
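For context, Knuth's segment-inversion step itself can be sketched as follows (a naive brute-force illustration; practical encoders track the weight incrementally and differ only in how the index k is encoded into the prefix):

```python
def knuth_balance(word):
    """Knuth's balancing sketch: invert the first k bits, for the
    smallest k that makes the word balanced. As k runs from 0 to n the
    candidate's weight changes by +/-1 per step and moves from w to
    n - w, so for even n it must pass through n/2."""
    n = len(word)
    for k in range(n + 1):
        cand = [1 - b for b in word[:k]] + word[k:]
        if sum(cand) == n // 2:
            return cand, k
    raise ValueError("no balancing index (n must be even)")

def knuth_restore(cand, k):
    """Decoder side: re-invert the first k bits to recover the word."""
    return [1 - b for b in cand[:k]] + cand[k:]
```

The redundancy of the overall code is exactly the cost of conveying k, which is what the improved prefix encodings above reduce.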
Journal of Discrete Mathematical Sciences and Cryptography, 2006
A symbol permutation invariant balanced (SPI-balanced) code over the alphabet Z_m = {0, 1, …, m − 1} is a block code over Z_m such that each alphabet symbol occurs as many times as any other symbol in every codeword. For this reason, every permutation among the symbols of the alphabet changes an SPI-balanced code into an SPI-balanced code. This means that SPI-balanced words are "the most balanced" among all possible m-ary balanced word types, and this property makes them very attractive from the application perspective. In particular, they can be used to achieve m-ary DC-free communication, to detect/correct asymmetric/unidirectional errors on the m-ary asymmetric/unidirectional channel, to achieve delay-insensitive communication, to maintain data integrity in digital optical disks, and so on. The paper gives some efficient methods to convert (encode) m-ary information sequences into m-ary SPI-balanced codes whose redundancy is roughly double the minimum possible redundancy r_min ≃ [(m − 1)/2] log_m n − (1/2)[1 − (1/log_m 2π)]m − (1/log_m 2π) for an SPI-balanced code with k information digits and length n = k + r. For example, the first method given in the paper encodes k information digits into an SPI-balanced code of length n = k + r, with r = (m − 1) log_m k + O(m log_m log_m k). A second method is recursive; it uses the first as a base code and encodes k digits into an SPI-balanced code of length n = k + r, with r ≃ (m − 1) log_m n − log_m[(m − 1)!].
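The defining property is easy to state in code. The checker below (an illustrative sketch, not one of the paper's encoders) also makes the permutation invariance evident:

```python
from collections import Counter

def is_spi_balanced(word, m):
    """True iff every symbol of Z_m = {0, ..., m-1} occurs equally
    often in `word`. Relabeling the alphabet by any permutation
    permutes the counts without changing them, so the property holds
    for the permuted word as well."""
    counts = Counter(word)
    return len(counts) == m and len(set(counts.values())) == 1
```

For m = 2 this reduces to the usual balanced (equal ones and zeros) condition.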
IEEE Transactions on Information Theory, 2003
We design multilevel coding (MLC) and bit-interleaved coded modulation (BICM) schemes based on low-density parity-check (LDPC) codes. The analysis and optimization of the LDPC component codes for the MLC and BICM schemes are complicated because, in general, the equivalent binary-input component channels are not necessarily symmetric. To overcome this obstacle, we deploy two different approaches: one based on independent and identically distributed (i.i.d.) channel adapters and the other based on coset codes. By incorporating i.i.d. channel adapters, we can force the symmetry of each binary-input component channel. By considering coset codes, we extend the concentration theorem based on previous work by Richardson et al. and Kavčić et al. We also discuss the relation between the systems based on the two approaches and show that they indeed have the same expected decoder behavior. Next, we jointly optimize the code rates and degree distribution pairs of the LDPC component codes for the MLC scheme. The optimized irregular LDPC codes at each level of MLC with multistage decoding (MSD) are able to perform well at signal-to-noise ratios (SNR) very close to the capacity of the additive white Gaussian noise (AWGN) channel. We also show that the optimized BICM scheme can approach the parallel independent decoding (PID) capacity as closely as does the MLC/PID scheme. Simulations with very large codeword length verify the accuracy of the analytical results. Finally, we compare the simulated performance of these coded modulation schemes at finite codeword lengths, and consider the results from the perspective of a random coding exponent analysis. Index Terms-Bit-interleaved coded modulation (BICM), coding exponent analysis, coset codes, density evolution, independent and identically distributed (i.i.d.) channel adapters, irregular low-density parity-check (LDPC) codes, LDPC codes, multilevel coding (MLC). I. 
INTRODUCTION. Multilevel coding (MLC) [3], [4] and bit-interleaved coded modulation (BICM) [5], [6] are two well-known coded modulation schemes proposed to achieve both power and bandwidth efficiency.
IEEE Transactions on Information Theory, 2000
This correspondence presents two variable-rate encoding algorithms that achieve capacity for the (d, k) constraint when k = 2d + 1, or when k − d + 1 is not prime. The first algorithm, symbol sliding, is a generalized version of the bit flipping algorithm introduced by Aviran et al. In addition to achieving capacity for (d, 2d + 1) constraints, it comes close to capacity in other cases. The second algorithm is based on interleaving and is a generalized version of the bit stuffing algorithm introduced by Bender and Wolf. This method uses fewer than k − d biased bit streams to achieve capacity for (d, k) constraints with k − d + 1 not prime.
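The plain, unbiased bit stuffing idea that these algorithms generalize can be sketched for a (d, ∞) runlength constraint (an illustrative sketch; the capacity-achieving versions additionally bias and split the input stream):

```python
def bit_stuff(data_bits, d):
    """Bit-stuffing sketch: insert d zeros after every 1 so that any
    two 1s in the output are separated by at least d zeros."""
    out = []
    for b in data_bits:
        out.append(b)
        if b == 1:
            out.extend([0] * d)
    return out

def satisfies_d(seq, d):
    """Check the d-constraint: at least d zeros between any two 1s."""
    gap = d  # a leading 1 is allowed
    for b in seq:
        if b == 1:
            if gap < d:
                return False
            gap = 0
        else:
            gap += 1
    return True
```

The decoder simply deletes the d zeros following each 1; the rate loss of this naive version is what biasing the input stream recovers.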
IEEE GLOBECOM 2007-2007 IEEE Global Telecommunications Conference, 2007
We propose a deterministic method to design irregular Low-Density Parity-Check (LDPC) codes for binary erasure channels (BEC). Compared to the existing methods, which are based on the application of asymptotic analysis tools such as density evolution or Extrinsic Information Transfer (EXIT) charts in an optimization process, the proposed method is much simpler and faster. Through a number of examples, we demonstrate that the codes designed by the proposed method perform very closely to the best codes designed by optimization. An important property of the proposed designs is the flexibility to select the number of constituent variable node degrees P. The proposed designs include existing deterministic designs as a special case with P = N-1, where N is the maximum variable node degree. Compared to the existing deterministic designs, for a given rate and a given δ > 0, the designed ensembles can have a threshold in a δ-neighborhood of the capacity upper bound with smaller values of P and N. They can also achieve the capacity of the BEC as N, and correspondingly P and the maximum check node degree, tend to infinity. Index Terms-channel coding, low-density parity-check (LDPC) codes, binary erasure channel (BEC), deterministic design. I. INTRODUCTION. Low-Density Parity-Check (LDPC) codes have received much attention in the past decade due to their attractive performance/complexity tradeoff on a variety of communication channels. In particular, on the Binary Erasure Channel (BEC), they achieve the channel capacity asymptotically [1-4]. In [1],[5],[6] a complete mathematical analysis for the performance of LDPC codes over the BEC, both asymptotically and for finite block lengths, has been developed. For other types of channels such as the Binary Symmetric Channel (BSC) and the Binary Input Additive White Gaussian Noise (BIAWGN) channel, only asymptotic analysis is available [7]. For irregular LDPC codes, the problem of finding ensemble …
IEEE Transactions on Information Theory, 2019
A novel Knuth-like balancing method for runlength-limited words is presented, which forms the basis of new variable- and fixed-length balanced runlength-limited codes that improve on the code rate as compared to balanced runlength-limited codes based on Knuth's original balancing procedure developed by Immink et al. While Knuth's original balancing procedure, as incorporated by Immink et al., requires the inversion of each bit one at a time, our balancing procedure only inverts the runs as a whole one at a time. The advantage of this approach is that the number of possible inversion points, which needs to be encoded by a redundancy-contributing prefix/suffix, is reduced, thereby allowing a better code rate to be achieved. Furthermore, this balancing method also allows for runlength-violating markers which improve, in a number of respects, on the optimal such markers based on Knuth's original balancing method.
In this article, we study properties and algorithms for constructing sets of constant-weight codewords with bipolar symbols, where the sum of the symbols is a constant q, q ≠ 0. We show various code constructions that extend Knuth's balancing vector scheme, q = 0, to the case where q > 0. We compute the redundancy of the new coding methods. Finally, we generalize the proposed methods to encoding of imbalanced arrays in two or more dimensions.
SN Computer Science, 2021
This paper presents the research results of mixed-base number systems using the binomial representation of numbers. It also presents the investigated non-linear coding techniques with full constant-weight codes, which are based on binomial numeration. These codes can be used in a variety of computer applications: in code-based cryptosystems, to detect errors in asymmetric communication channels, etc. We propose a new numeral system that combines the features of positional and binomial numeration, and we suggest a technique of full non-binary constant-weight coding based on a generalized binomial-positional representation. This allows us to generalize the known approach to the non-binary case and practically implement computational algorithms for generating the full set of non-binary constant-weight sequences. Analytical relations linking the positional and binomial representations of numbers are given. The article provides several examples that illustrate the usefulness and constructiveness of the proposed approach and simplify the perception and understanding of the results. We also consider some aspects of the potential use of the proposed numeral system. In particular, the article discusses asymmetric communication channels (the model of a binary Z-channel and its generalization to the non-binary model) and shows the advantage of constant-weight codes for error detection.
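The binomial numeration underlying such constant-weight enumeration can be sketched for the binary case via the combinatorial number system (an illustrative sketch; the paper's non-binary generalization replaces binomials with multinomial-style counts):

```python
from math import comb

def combinadic(N, k):
    """Combinatorial number system sketch: represent N uniquely as
    N = C(c_k, k) + ... + C(c_1, 1) with c_k > ... > c_1 >= 0. The
    c_i are exactly the positions of the k ones in the N-th
    constant-weight binary word."""
    positions = []
    for i in range(k, 0, -1):
        # greedily pick the largest c with C(c, i) <= N
        c = i - 1
        while comb(c + 1, i) <= N:
            c += 1
        N -= comb(c, i)
        positions.append(c)
    return positions
```

Every integer in [0, C(n, k)) thus maps to a distinct weight-k word, which is the basis for full (maximum-cardinality) constant-weight coding.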
IEEE Journal on Selected Areas in Communications, 2000
A general and systematic code design methodology is proposed to efficiently combine constrained codes with parity-check (PC) codes for data storage channels. The proposed constrained PC code includes two component codes: the normal constrained (NC) code and the parity-related constrained (PRC) code.
IEEE Transactions on Information Theory, 1990
A (2n, k, l, c, d) dc-free binary block code is a code of length 2n, constant weight n, 2^k codewords, maximum runlength of a symbol l, maximum accumulated charge c, and minimum distance d. The requirements are that k and d be large and l and c small. We present a (16, 9, 6, 3, 4) dc-free block code and a (30, 20, 10, 6, 4) dc-free block code. Easy encoding and decoding procedures for these codes are given. Given a code C1 of length n, even weight, and distance 4, we can obtain a (4n, k, l, c, 4) dc-free block code C2, where l is 4, 5, or 6, and c is not greater than n + 1 (but usually significantly smaller). If C1 is easily constructed, then C2 has easy encoding and decoding procedures.
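The two constraint parameters can be measured directly from a codeword. The helper below (an illustrative sketch using the usual 0 → −1 bipolar reading) returns the maximum runlength l and the maximum accumulated charge c:

```python
def code_parameters(word):
    """Maximum runlength and maximum accumulated charge (peak |RDS|)
    of a binary word, with 0 read as -1: the parameters l and c of a
    dc-free block code."""
    max_run = run = 1
    charge = max_charge = 0
    for i, b in enumerate(word):
        charge += 1 if b else -1
        max_charge = max(max_charge, abs(charge))
        if i and b == word[i - 1]:
            run += 1
            max_run = max(max_run, run)
        else:
            run = 1
    return max_run, max_charge
```

Checking every codeword of a candidate set with such a helper verifies the claimed (l, c) parameters.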
IEEE Transactions on Communications, 2000
We present an enumerative technique for encoding and decoding dc-free runlength-limited sequences. This technique enables the encoding and decoding of sequences approaching the maxentropic performance bounds very closely in terms of code rate and low-frequency suppression capability. Use of finite-precision floating-point notation to express the weight coefficients results in channel encoders and decoders of moderate complexity. For channel constraints of practical interest, the hardware required for implementing such a quasi-maxentropic coding scheme consists mainly of a ROM of at most 5 kB.
IEEE Transactions on Information Theory, 2019
It is known that, for large user word lengths, auxiliary data can be used to recover most of the redundancy loss of Knuth's simple balancing method as compared to the optimal redundancy of balanced codes for the binary case. Here, this important result is extended in a number of ways. Firstly, an upper bound for the amount of auxiliary data is derived that is valid for all codeword lengths. This result is primarily of theoretical interest, as it defines the probability distribution of the number of balancing indices that results in optimal redundancy. This result is equally valid for particular nonbinary generalizations of Knuth's balancing method. Secondly, an asymptotically exact expression for the amount of auxiliary data for the ternary case of a variable length realization of the modified balanced code construction by Pelusi et al. is derived, that, in all respects, is the analogue of the result obtained for the binary case. The derivation is based on a generalization of the binary random walk to the ternary case and a simple modification of an existing generalization of Knuth's method for non-binary balanced codes. Finally, a conjecture is proposed regarding the probability distribution of the number of balancing indices for any alphabet size.
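For reference, the set of balancing indices whose count this analysis concerns can be computed by brute force in the binary case (an illustrative sketch; the paper studies the distribution of the size of this set):

```python
def balancing_indices(word):
    """All indices k for which inverting the first k bits of a binary
    word yields a balanced word. The auxiliary data exploits the fact
    that there is usually more than one such k."""
    n = len(word)
    return [k for k in range(n + 1)
            if sum(1 - b for b in word[:k]) + sum(word[k:]) == n // 2]
```

An already balanced word, for instance, has k = 0 and k = n among its balancing indices, and choosing among the valid indices is what carries the auxiliary data.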
GLOBECOM'01. IEEE Global Telecommunications Conference (Cat. No.01CH37270), 2001
A general method of constructing (d, k) constrained codes from arbitrary sequences is introduced. This method is then used for constructing a class of weakly constrained codes. The proposed codes are analyzed for the case of d = 0 and shown to give results which are better or comparable to those of the best available codes, however at the cost of failure with some very low probability.
IEEE Transactions on Information Theory, 2000
Fig. 10. Runlength constraint graph. Fig. 11. Continuous-time power spectra of the maxentropic (1,7) constraint and the new (1,7) code.
Abstract: Recently, Knuth presented coding schemes in which each …
Journal of Communications
A simple scheme was proposed by Knuth to generate balanced codewords from a random binary information sequence. However, the redundancy of this method is almost twice the minimal achievable redundancy, i.e., that of the full set of balanced codewords. The gap between the redundancy generated by Knuth's algorithm and the minimal one is considerable and can be reduced. This paper attempts to achieve this goal through a method based on information sequence candidates. Index Terms-Balanced code, inversion point, redundancy, running digital sum (RDS), running digital sum from left (RDSL), running digital sum from right (RDSR), information sequence candidates.