1990, IEEE Transactions on Information Theory
A (2n, k, l, c, d) dc-free binary block code is a code of length 2n, constant weight n, 2^k codewords, maximum runlength of a symbol l, maximum accumulated charge c, and minimum distance d. The requirements are that k and d be large and l and c small. We present a (16, 9, 6, 3, 4) dc-free block code and a (30, 20, 10, 6, 4) dc-free block code. Easy encoding and decoding procedures for these codes are given. Given a code C1 of length n, even weight, and distance 4, we can obtain a (4n, k, l, c, 4) dc-free block code C2, where l is 4, 5, or 6, and c is not greater than n+1 (but usually significantly smaller). If C1 is easily constructed, then C2 has easy encoding and decoding procedures.
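The five parameters can be checked mechanically for any candidate word. A minimal sketch (our helper, not from the paper; a 1 contributes +1 to the accumulated charge and a 0 contributes -1):

```python
def dc_free_params(codeword):
    """Return (weight, max runlength, max |accumulated charge|) for a
    binary word, counting a 1 as +1 charge and a 0 as -1."""
    weight = sum(codeword)
    # longest run of identical symbols
    max_run, run = 1, 1
    for a, b in zip(codeword, codeword[1:]):
        run = run + 1 if a == b else 1
        max_run = max(max_run, run)
    # running digital sum (accumulated charge) over every prefix
    charge, max_charge = 0, 0
    for bit in codeword:
        charge += 1 if bit else -1
        max_charge = max(max_charge, abs(charge))
    return weight, max_run, max_charge

# a length-16 word of weight 8 with short runs and small charge excursions
w = [1, 0, 1, 0, 1, 1, 0, 0] * 2
print(dc_free_params(w))  # (8, 2, 2)
```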
Certain properties of the parity-check matrix H of (n, k) linear codes are used to establish a computerised search procedure for new binary linear codes. Of the new error-correcting codes found by this procedure, two codes were capable of correcting up to two errors, three codes up to three errors, four codes up to four errors and one code up to five errors. Two meet the lower bound given by Helgert and Stinaff, and seven codes exceed it. In addition, one meets the upper bound. Of the even-Hamming-distance versions of these codes, eight meet the upper bound, and the remaining two exceed the lower bound.
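A code with minimum distance d corrects up to (d-1)/2 errors, so the correction capabilities above correspond to minimum distances. For small parameters the distance can be verified by exhaustive search over the row space of a generator matrix (an illustrative check, not the paper's parity-check-based search procedure):

```python
from itertools import product

def min_distance(G):
    """Minimum Hamming distance of the binary linear code generated by G,
    i.e. the minimum weight of a nonzero codeword."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the all-zero message
        word = [sum(m * g for m, g in zip(msg, col)) % 2
                for col in zip(*G)]
        best = min(best, sum(word))
    return best

# generator matrix of the [7,4] Hamming code: d = 3, corrects 1 error
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
d = min_distance(G)
print(d, "corrects", (d - 1) // 2, "error(s)")  # 3 corrects 1 error(s)
```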
IEEE Transactions on Information Theory, 1991
New combinatorial and algebraic techniques are presented for systematically constructing different (d,k) block codes capable of detecting and correcting single bit errors, single peak-shift errors, double adjacent errors, and multiple adjacent erasures. Constructions utilizing channel side information, such as the magnetic recording ternary channel output string or erasures, do not impose any restriction on the k-constraint, while some of the other constructions require k = 2d. Due to the small and fixed number of redundant bits, the rates of both classes of constructions can be made to approach the capacity of the d-constrained channel for long codeword lengths. All the codes can be encoded and decoded with simple, structured logic circuits.
Physical Communication, 2019
The design of block codes for short information blocks (e.g., a thousand or less information bits) is an open research problem that is gaining relevance thanks to emerging applications in wireless communication networks. In this paper, we review some of the most promising code constructions targeting the short block regime, and we compare them with both finite-length performance bounds and classical error-correction coding schemes. The work addresses the use of both binary and high-order modulations over the additive white Gaussian noise channel. We will illustrate how to effectively approach the theoretical bounds with various performance versus decoding complexity tradeoffs.
Philips Journal of Research
The systematic design of DC-constrained codes based on codewords of fixed length is considered. Simple recursion relations for enumerating the number of codewords satisfying a constraint on the maximum unbalance of ones and zeros in a codeword are derived. An enumerative scheme for encoding and decoding maximum-unbalance-constrained codewords with binary symbols is developed. Examples of constructions of transmission systems based on unbalance-constrained codewords are given. A worked example of an 8b10b channel code is given; it is of particular interest because of its practical simplicity and relative efficiency.
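The kind of recursion referred to here can be illustrated by counting words whose running digital sum stays bounded at every prefix; such counts are exactly what an enumerative encoder consumes. A toy version (our naming and conventions, not the paper's):

```python
from functools import lru_cache

def count_bounded_rds(n, c):
    """Number of binary words of length n whose running digital sum
    (1 -> +1, 0 -> -1) stays within [-c, +c] at every prefix."""
    @lru_cache(maxsize=None)
    def walks(length, s):
        if abs(s) > c:
            return 0          # charge bound violated
        if length == 0:
            return 1          # one valid completion: the empty word
        # append a 1 (charge +1) or a 0 (charge -1)
        return walks(length - 1, s + 1) + walks(length - 1, s - 1)
    return walks(n, 0)

print(count_bounded_rds(10, 3))  # 560 of the 1024 length-10 words survive
```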
IEEE Signal Processing Magazine, 2004
Channel coding is an error-control technique used for providing robust data transmission through imperfect channels by adding redundancy to the data. There are two important classes of such coding methods: block and convolutional. For this tutorial, we focus on linear block codes because they provide much insight and allow for a simple visualization of the error detection/correction process. Forward error correction (FEC) is the name used when the receiving equipment does most of the work. In the case of block codes, the decoder looks for errors and, once detected, corrects them (according to the capability of the code). The technique has become an important signal-processing tool used in modern communication systems and in a wide variety of other digital applications such as high-density memory and recording media. Such coding provides system performance improvements at significantly lower cost than other methods that increase signal-to-noise ratio (SNR), such as increased power or antenna gain. In this article we first develop the ideas behind simple binary codes. We then treat cyclic and nonbinary codes.
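As a concrete instance of block-code FEC, the [7,4] Hamming code corrects any single bit error by syndrome decoding (a standard textbook construction, not specific to this article):

```python
# parity-check matrix of the [7,4] Hamming code; column i (1-based) is
# the binary expansion of i, so the syndrome spells the error position
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def correct_single_error(received):
    """Syndrome decoding: compute H.r over GF(2) and flip the bit
    the syndrome points at (syndrome 0 means no error detected)."""
    r = list(received)
    syndrome = [sum(h * x for h, x in zip(row, r)) % 2 for row in H]
    pos = syndrome[0] * 4 + syndrome[1] * 2 + syndrome[2]
    if pos:
        r[pos - 1] ^= 1
    return r

codeword = [1, 1, 1, 0, 0, 0, 0]   # satisfies H.c = 0
received = codeword[:]
received[4] ^= 1                   # channel corrupts bit 5
print(correct_single_error(received) == codeword)  # True
```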
Lecture Notes in Computer Science, 2000
Block codes are first viewed as finite state automata represented as trellises. A technique termed subtrellis overlaying is introduced with the object of reducing decoder complexity. Necessary and sufficient conditions for subtrellis overlaying are next derived from the representation of the block code as a group, partitioned into a subgroup and its cosets. Finally a view of the code as a graph permits a combination of two shortest path algorithms to facilitate efficient decoding on an overlayed trellis.
AIP Conference Proceedings, 2019
The codes generated using the tensor product, called tensor codes, have properties and composition similar to Linear Error Block codes (LEB codes). In this paper we study in depth the construction of new LEB codes using the tensor product (TP). We also show that the TP code formed by two LEB codes is itself an LEB code. We prove that the TP of two Hamming codes has minimum distance 3 but is not a Hamming code; it is a non-perfect LEB code. We show that the TP code formed by two π-cyclic codes (resp. simplex LEB codes) is a π-cyclic code (resp. simplex LEB code).
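For ordinary linear codes the TP construction is concrete: an [n1, k1] code and an [n2, k2] code yield an [n1 n2, k1 k2] code generated by the Kronecker product of their generator matrices. A sketch under those standard conventions (the π-block structure specific to LEB codes is omitted):

```python
def kron(A, B):
    """Kronecker (tensor) product of two 0/1 matrices given as
    lists of rows; entries stay in {0, 1} since 0*0 = 0, 1*1 = 1."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

# generator matrices of a [3,2] parity code and the [2,1] repetition code
G1 = [[1, 0, 1],
      [0, 1, 1]]
G2 = [[1, 1]]

G = kron(G1, G2)   # generates a [6, 2] tensor product code
print(G)           # [[1, 1, 0, 0, 1, 1], [0, 0, 1, 1, 1, 1]]
```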
2010
Linear binary block codes whose binary symbols are not equally likely are analyzed. An encoding graph for systematic linear block codes is proposed. These codes are viewed as sources with memory, and the information quantities H(S,X), H(S), H(X), H(X|S), H(S|X), and I(S,X) are derived. Based on these quantities, the code performances are analyzed.
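The six quantities listed are tied together by the standard identities H(S,X) = H(S) + H(X|S) = H(X) + H(S|X) and I(S;X) = H(S) + H(X) - H(S,X), so all of them follow from a joint distribution. A small sketch with a made-up joint distribution (not the paper's model):

```python
from math import log2

def entropies(joint):
    """Given joint p(s, x) as a dict {(s, x): p}, return H(S,X), H(S),
    H(X), H(X|S), H(S|X) and I(S;X) via the chain-rule identities."""
    def H(dist):
        return -sum(p * log2(p) for p in dist.values() if p > 0)
    ps, px = {}, {}                      # marginals of S and X
    for (s, x), p in joint.items():
        ps[s] = ps.get(s, 0) + p
        px[x] = px.get(x, 0) + p
    HSX, HS, HX = H(joint), H(ps), H(px)
    return {"H(S,X)": HSX, "H(S)": HS, "H(X)": HX,
            "H(X|S)": HSX - HS, "H(S|X)": HSX - HX,
            "I(S;X)": HS + HX - HSX}

# toy joint distribution of a source symbol S and a coded symbol X
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
q = entropies(joint)
print(round(q["I(S;X)"], 4))  # 0.2781
```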
International Journal of Advanced Computer Science and Applications, 2021
Because of their algebraic structure and simple hardware implementation, linear codes, as a class of error-correcting codes, are used in a multitude of situations such as compact disks, bar codes, satellite and wireless communication, storage systems, and ISBN numbers. Nevertheless, the design of linear codes with a high minimum Hamming distance for a given dimension and length remains an open challenge in coding theory. In this work, we propose a method for constructing good binary linear codes from popular ones using the Hadamard matrix. The proposed method takes advantage of the MacWilliams identity for computing the weight distribution, to overcome the problem of computing the minimum Hamming distance for larger dimensions.
Keywords—Binary linear codes; code construction; minimum Hamming distance; error-correcting codes; weight distribution; coding theory; Hadamard matrix
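The MacWilliams identity determines the weight distribution of the dual code from that of the code itself; one computable form expands it in Krawtchouk polynomials. A sketch on the [7,4] Hamming code (a standard example, not this paper's construction):

```python
from math import comb

def macwilliams(A, n):
    """Weight distribution of the dual code, from the distribution A of
    a binary [n, k] code, via the Krawtchouk form of MacWilliams:
    A'_j = (1/|C|) * sum_i A_i * K_j(i)."""
    size = sum(A)  # |C| = 2^k
    def K(j, i):
        return sum((-1) ** l * comb(i, l) * comb(n - i, j - l)
                   for l in range(j + 1))
    return [sum(A[i] * K(j, i) for i in range(n + 1)) // size
            for j in range(n + 1)]

# weight distribution of the [7,4] Hamming code
A = [1, 0, 0, 7, 7, 0, 0, 1]
print(macwilliams(A, 7))  # [1, 0, 0, 0, 7, 0, 0, 0], the [7,3] simplex code
```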
Definition: A block code of length n and size M over an alphabet with q symbols is a set of M q-ary n-tuples called codewords.
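For example, with q = 2, n = 3, and M = 4 one may take all even-weight binary triples; the rate of a q-ary block code is (log_q M)/n:

```python
from math import log2

# a block code of length n = 3 and size M = 4 over the binary alphabet:
# the set of all even-weight 3-tuples
code = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}

n, M = 3, len(code)
rate = log2(M) / n   # equals k/n when M = 2^k
print(M, round(rate, 3))  # 4 0.667
```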