IEEE Communications Letters, 2018
Sparse Network Coding (SNC) is a promising technique for reducing the complexity of Random Linear Network Coding (RLNC) by selecting a sparse coefficient matrix to code the packets. However, the performance of SNC in terms of the Average Decoding Delay (ADD) of the packets is still unknown. In this paper, we study the ADD performance and propose a Markov chain model to analyze this SNC metric. The model provides a lower bound on the decoding delay of a generation, as well as a lower bound on the decoding delay of a portion of a generation. Our results show that although RLNC provides a better decoding delay for an entire generation, SNC outperforms RLNC in terms of ADD per packet. The sparsity of the coefficient matrix is a key parameter for the ADD per packet when transmitting streaming data. The proposed model enables us to select an appropriate degree of sparsity based on the required ADD. Numerical results validate that the proposed model enables a precise evaluation of the behavior of the SNC technique.
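As a minimal illustration of such a rank-based Markov chain, the sketch below computes the expected number of received packets needed to decode a full generation under the classical dense-RLNC transition probability 1 - q^(r-K) at rank r. The paper's SNC model replaces this with sparsity-dependent transitions, so this is only a baseline; K and q are arbitrary example values.

```python
# A minimal sketch, assuming the dense-RLNC transition probability:
# at rank r, a fresh coded packet is innovative with probability
# 1 - q**(r - K), so the expected receptions to full rank K follow
# from a pure-birth Markov chain. SNC needs sparsity-aware transitions.

def expected_transmissions(K: int, q: int) -> float:
    """Expected received packets until the decoding matrix has rank K."""
    return sum(1.0 / (1.0 - q ** (r - K)) for r in range(K))

for q in (2, 16, 256):
    print(f"q = {q:3d}: E[packets] for K = 32 -> {expected_transmissions(32, q):.3f}")
```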
IEEE Access, 2017
While random linear network coding is known to improve network reliability and throughput, its high costs for delivering coding coefficients and decoding represent an obstacle where nodes have limited power to transmit and decode packets. In this paper, we propose sparse network codes for scenarios where low coding vector weights and low decoding cost are crucial. We consider generation-based network codes where source packets are grouped into overlapping subsets called generations, and coding is performed only on packets within the same generation in order to achieve sparseness and low complexity. A sparse code is proposed that comprises a precode and random overlapping generations. The code is shown to be much sparser than existing codes that enjoy similar code overhead. To efficiently decode the proposed code, a novel low-complexity overhead-optimized decoder is proposed in which code sparsity is exploited through local processing and multiple rounds of pivoting. Through extensive simulation comparison with existing schemes, we show that short transmissions on the order of 10^2-10^3 source packets, a range convenient for many applications of interest, can be efficiently decoded by the proposed decoder.
Index Terms: network coding, sparse codes, random codes, generations, code overhead, efficient decoding.
arXiv (Cornell University), 2016
Random linear network coding (RLNC) in theory achieves the max-flow capacity of multicast networks, at the cost of high decoding complexity. To improve the performance-complexity tradeoff, we consider the design of sparse network codes. A generation-based strategy is employed in which source packets are grouped into overlapping subsets called generations. RLNC is performed only amongst packets belonging to the same generation throughout the network so that sparseness can be maintained. In this paper, generation-based network codes with low reception overheads and decoding costs are designed for transmitting on the order of 10^2-10^3 source packets. A low-complexity overhead-optimized decoder is proposed that exploits "overlaps" between generations. The sparseness of the codes is exploited through local processing and multiple rounds of pivoting of the decoding matrix. To demonstrate the efficacy of our approach, codes comprising a binary precode, random overlapping generations, and binary RLNC are designed. The results show that our designs can achieve negligible code overheads at low decoding costs, and outperform existing network codes that use the generation-based strategy.
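A minimal sketch of the generation-based strategy is given below, assuming a sliding-window generation layout with wrap-around and binary RLNC within a randomly chosen generation; the paper's random-overlap construction and precode are more elaborate, and packet payloads are modeled as 64-bit integers for brevity.

```python
import random

# Sketch of generation-based binary RLNC with overlapping generations.
# The sliding-window layout (size g, overlap v, with wrap-around) is a
# simplifying assumption, not the paper's optimized random-overlap design.

def make_generations(num_packets, gen_size, overlap):
    """Cover the packets with generations that overlap by `overlap`."""
    step = gen_size - overlap
    gens, start = [], 0
    while start < num_packets:
        gens.append([i % num_packets for i in range(start, start + gen_size)])
        start += step
    return gens

def encode(packets, generations):
    """Pick a random generation and XOR a random subset of it (binary RLNC)."""
    gid = random.randrange(len(generations))
    coeffs = {i: random.randint(0, 1) for i in generations[gid]}
    coded = 0
    for i, c in coeffs.items():
        if c:
            coded ^= packets[i]   # payloads modeled as 64-bit ints over GF(2)
    return gid, coeffs, coded

packets = [random.getrandbits(64) for _ in range(100)]
gens = make_generations(len(packets), gen_size=20, overlap=5)
gid, coeffs, coded = encode(packets, gens)
print(f"generation {gid}, weight {sum(coeffs.values())}, payload {coded:#018x}")
```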
IEEE Communications Letters, 2020
One of the by-products of Sparse Network Coding (SNC) is the ability to perform partial decoding, i.e., decoding some original packets prior to collecting all the coded packets needed to decode the entire coded data. Due to this ability, SNC has recently been used as a technique for reducing the Average Decoding Delay (ADD) per packet in real-time multimedia applications. This study focuses on characterizing the ADD per packet for SNC considering the impact of finite field size. We present a Markov chain model that allows us to determine lower bounds on the mean number of transmissions required to decode a fraction of a generation and on the ADD per packet of the generation. We validate our model using simulations and show that smaller finite fields, e.g., q = 2^4, outperform large finite fields, e.g., q = 2^32, in regard to the ADD per packet and provide a better trade-off between the ADD per packet and the overall number of transmissions needed to decode a generation.
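The sketch below illustrates partial decoding by Monte Carlo simulation, counting receptions until the first source packet can be recovered from sparse coded packets. It works over GF(2) only (larger fields such as q = 2^4 or q = 2^32 would need a finite-field library), and the generation size and density are illustrative, not the paper's.

```python
import random
import numpy as np

def gf2_rref(M):
    """Reduced row echelon form over GF(2)."""
    M = M.copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
        if r == rows:
            break
    return M

def receptions_until_first_decode(K=32, density=0.2, trials=300):
    """Mean receptions until at least one source packet is recovered."""
    total = 0
    for _ in range(trials):
        rows = []
        while True:
            v = (np.random.random(K) < density).astype(np.uint8)
            if not v.any():
                continue                          # all-zero vector carries nothing
            rows.append(v)
            R = gf2_rref(np.array(rows))
            if any(row.sum() == 1 for row in R):  # singleton row = decoded packet
                total += len(rows)
                break
    return total / trials

print(f"mean receptions to first decoded packet: {receptions_until_first_decode():.2f}")
```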
Network Coding is a promising approach to increase network throughput and robustness to facilitate high volume traffic. Performing network coding in dynamic network structures requires transmitting coding coefficients for information sinks to decode network coded packets. Compared to the packet sizes used in practical networks, the size of coefficient vectors can be significant. This paper exploits the properties of small and medium sized networks and proposes a novel approach to minimise the coefficient vector size of network coded packets. Simulation results exhibit better compression of coefficient vectors over existing algorithms for small and medium sized networks.
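The paper's compression algorithm is not detailed here; as a point of reference, the sketch below shows the obvious sparse baseline of sending (index, value) pairs instead of a dense coefficient vector, which already illustrates why small networks with sparse codes admit large savings.

```python
# Baseline comparison of coefficient-vector sizes (illustrative only,
# not the paper's algorithm): dense encoding ships one field element
# per source packet; the sparse encoding ships a count prefix plus
# (index, value) pairs for the nonzero coefficients.

def dense_size(coeffs, field_bytes=1):
    """Dense encoding: one field element per source packet."""
    return len(coeffs) * field_bytes

def sparse_size(coeffs, field_bytes=1, index_bytes=1):
    """Sparse encoding: a count prefix plus (index, value) pairs."""
    nonzero = [(i, c) for i, c in enumerate(coeffs) if c]
    return 1 + len(nonzero) * (index_bytes + field_bytes)

vec = [0] * 28 + [3, 0, 7, 1]      # 32 coefficients, 3 of them nonzero
print(dense_size(vec), "bytes dense vs", sparse_size(vec), "bytes sparse")
```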
Sparse random linear network coding (SRLNC) is an attractive technique proposed in the literature to reduce the decoding complexity of random linear network coding. Recognizing the fact that the existing SRLNC schemes are not efficient in terms of the required reception overhead, we consider the problem of designing overhead-optimized SRLNC schemes. To this end, we introduce a new design of SRLNC scheme that enjoys very small reception overhead while maintaining the main benefit of SRLNC, i.e., its linear encoding/decoding complexity. We also provide a mathematical framework for the asymptotic analysis and design of this class of codes based on density evolution (DE) equations. To the best of our knowledge, this work introduces the first DE analysis in the context of network coding.
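As a flavor of the DE methodology, the sketch below iterates the standard density-evolution recursion for an LDPC ensemble over the binary erasure channel, x <- eps * lam(1 - rho(1 - x)); the paper's actual equations are specific to its SRLNC ensemble, and the degree polynomials here are arbitrary examples.

```python
# Illustrative density evolution for an LDPC ensemble on the binary
# erasure channel. lam and rho are edge-perspective degree polynomials;
# the recursion tracks the probability x that a message is still erased.

def lam(x):      # half the edges on degree-2, half on degree-3 variable nodes
    return 0.5 * x + 0.5 * x * x

def rho(x):      # all check nodes of degree 6: x^(d-1)
    return x ** 5

def residual_erasure(eps, iters=200):
    """Iterate x <- eps * lam(1 - rho(1 - x)) starting from x = eps."""
    x = eps
    for _ in range(iters):
        x = eps * lam(1.0 - rho(1.0 - x))
    return x

for eps in (0.30, 0.40, 0.50):
    print(f"channel erasure {eps:.2f} -> residual {residual_erasure(eps):.2e}")
```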
2015
Over the past decade, network coding (NC) has emerged as a new paradigm for data communications and has attracted much popularity and research interest in information and coding theory, networking, wireless communications and data storage. Random linear NC (RLNC) is a subclass of NC that has been shown to be suitable for a wide range of applications thanks to its desirable properties, namely throughput optimality, simple encoder design and efficient operation with minimum feedback requirements. However, for delay-sensitive applications, these advantages come with two main issues that may restrict the use of RLNC in practice. First is the trade-off between the delay and throughput performance of RLNC, which can adversely affect its throughput optimality and hence its overall performance. Second is the use of feedback: even if feedback is kept to a minimum, it can still incur a large amount of delay and thus degrade RLNC performance if not optimized properly. In...
2011
Motivated by the noncoherent subspace coding approach and the low-complexity sparse coding approach to realizing random linear network coding, we consider the problem of characterizing the probability of having a full-rank (or nonsingular) square transfer matrix over a finite field, for which the probability of choosing the zero element is different from that of choosing a nonzero element.
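A quick way to get a feel for this quantity is Monte Carlo estimation; the sketch below does so over GF(2), where biasing the probability of the zero element is a single Bernoulli parameter. The paper's analysis covers general finite fields, and the matrix size and trial count here are arbitrary.

```python
import numpy as np

# Monte Carlo sketch: probability that an n-by-n random matrix over
# GF(2) is nonsingular when each entry is zero with probability p_zero
# (p_zero = 0.5 recovers the uniform case).

def gf2_rank(M):
    """Rank over GF(2) by forward elimination."""
    M = M.copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(r + 1, rows):
            if M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

def full_rank_probability(n, p_zero, trials=2000):
    hits = 0
    for _ in range(trials):
        M = (np.random.random((n, n)) >= p_zero).astype(np.uint8)
        hits += gf2_rank(M) == n
    return hits / trials

for p_zero in (0.5, 0.7, 0.9):
    print(f"P(zero entry) = {p_zero}: P(full rank) ~ {full_rank_probability(16, p_zero):.3f}")
```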
In this paper, we investigate the throughput and decoding-delay performance of random linear network coding as a function of the coding window size and the network size in an unreliable single-hop broadcast network setting. Our model consists of a source transmitting packets of a single flow to a set of N receivers over independent erasure channels. The source performs random linear network coding (RLNC) over K (coding window size) packets and broadcasts them to the receivers. We note that the broadcast throughput of RLNC must vanish with increasing N, for any fixed K. Hence, in contrast to other works in the literature, we investigate how the coding window size K must scale for increasing N. By appealing to the Central Limit Theorem, we approximate the Negative Binomial random variable arising in our analysis by a Gaussian random variable. We then obtain tight upper and lower bounds on the mean decoding delay and throughput in terms of K and N. Our analysis reveals that the coding window size of ln(N) represents a phase transition rate below which the throughput converges to zero, and above which it converges to the broadcast capacity. Our numerical investigations show that the bounds obtained using the Gaussian approximation also apply to the real system performance, thus illustrating the accuracy of the analysis.
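The model can be reproduced in a few lines: each receiver's delay is a negative binomial random variable (number of transmissions until K successes over an erasure channel), and the system delay is the maximum over the N receivers. The Monte Carlo sketch below assumes every received coded packet is innovative (a large-field idealization); K and the erasure rate are example values.

```python
import random

# Broadcast-delay sketch: receiver delay ~ NegativeBinomial(K, 1 - eps),
# system delay = max over N receivers. Assumes every received packet
# is innovative, which holds approximately for large fields.

def negbin(K, p_success):
    """Transmissions until K packets get through an erasure channel."""
    t, got = 0, 0
    while got < K:
        t += 1
        if random.random() < p_success:
            got += 1
    return t

def mean_broadcast_delay(N, K, eps, trials=500):
    """Mean of the max over N receivers' completion times."""
    return sum(max(negbin(K, 1 - eps) for _ in range(N))
               for _ in range(trials)) / trials

for N in (1, 10, 100):
    print(f"N = {N:3d}: mean decoding delay ~ {mean_broadcast_delay(N, K=64, eps=0.1):.1f}")
```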
In recent years, video streaming over wireless networks has become very popular. Users can join the network easily and move from one point to another without restriction. On the other hand, providing smooth video playback at these nodes is difficult due to time-varying channels, obstacles, and low upload and download bandwidth, especially on devices such as mobile phones. Although efficient video compression techniques can mitigate these problems to some degree, better and more efficient solutions are required to cope with them. Random Network Coding (RNC) promises high video quality at receivers by increasing the diversity of encoded packets. However, decoding these packets is challenging because of the delay it imposes on the system. This study compares the computation time of two existing decoding techniques. The results show that the efficiency of these methods depends on both the number and the size of the blocks. Moreover, the Gauss-Jordan elimination method provides better efficiency when the number of video blocks exceeds 256.
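For concreteness, the sketch below contrasts the two eliminations over GF(2): plain Gaussian elimination (forward pass plus back-substitution) versus Gauss-Jordan (eliminating above and below each pivot in one pass). The matrix size is an arbitrary stand-in for the study's video-block setup, so absolute timings are not comparable to its results.

```python
import time
import numpy as np

def gauss(M):
    """Gaussian elimination: forward pass, then back-substitution."""
    M = M.copy()
    n = M.shape[0]
    r, pivots = 0, []
    for c in range(n):
        piv = next((i for i in range(r, n) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(r + 1, n):
            if M[i, c]:
                M[i] ^= M[r]
        pivots.append((r, c))
        r += 1
    for r, c in reversed(pivots):          # back-substitution
        for i in range(r):
            if M[i, c]:
                M[i] ^= M[r]
    return M

def gauss_jordan(M):
    """Gauss-Jordan: eliminate above and below each pivot in one pass."""
    M = M.copy()
    n = M.shape[0]
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, n) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(n):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return M

M = np.random.randint(0, 2, (256, 256), dtype=np.uint8)
for solver in (gauss, gauss_jordan):
    t0 = time.perf_counter()
    solver(M)
    print(f"{solver.__name__}: {time.perf_counter() - t0:.3f} s")
```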
2008 5th IEEE International Conference on Mobile Ad Hoc and Sensor Systems, 2008
Network coding is a highly efficient data dissemination mechanism for wireless networks. Since network coded information can only be recovered after delivering a sufficient number of coded packets, the resulting decoding delay can become problematic for delay-sensitive applications such as real-time media streaming. Motivated by this observation, we consider several algorithms that minimize the decoding delay and analyze their performance by means of simulation. The algorithms differ both in the required information about the state of the neighbors' buffers and in the way this knowledge is used to decide which packets to combine through coding operations. Our results show that a greedy algorithm, whose encodings maximize the number of nodes at which a coded packet is immediately decodable, significantly outperforms existing network coding protocols.
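One simple reading of such a greedy rule is sketched below: grow an XOR combination, keeping a packet only if it increases the number of receivers for which the combination is instantly decodable (exactly one unknown packet). This is an illustrative heuristic, not the paper's exact algorithm, and the buffer states are randomly generated.

```python
import random

# Greedy sketch in the spirit of instantly-decodable network coding.
# A receiver can decode an XOR combination immediately iff exactly one
# packet in the combination is unknown to it.

def decodable_at(combo, have):
    return sum(1 for p in combo if p not in have) == 1

def greedy_combo(K, haves):
    """Grow the XOR set, keeping additions that help strictly more receivers."""
    combo, best = [], 0
    for p in range(K):
        trial = combo + [p]
        score = sum(decodable_at(trial, h) for h in haves)
        if score > best:
            combo, best = trial, score
    return combo, best

K, N = 8, 5
haves = [set(random.sample(range(K), random.randint(3, 7))) for _ in range(N)]
combo, score = greedy_combo(K, haves)
print(f"XOR packets {combo}: instantly decodable at {score} of {N} receivers")
```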
2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2019
Numerous applications require the sharing of data from each node on a network with every other node. In the case of Connected and Autonomous Vehicles (CAVs), it will be necessary for vehicles to update each other with their positions, manoeuvring intentions, and other telemetry data, despite shadowing caused by other vehicles. These applications require scalable, reliable, low-latency communications over challenging broadcast channels. In this article, we consider the allcast problem of achieving multiple simultaneous network broadcasts over a broadcast medium. We model slow fading using random graphs and show that an allcast method based on sparse random linear network coding can achieve reliable allcast in a constant number of transmission rounds. We compare this with an uncoded baseline, which we show requires O(log(n)) transmission rounds. We justify and compare our analysis with extensive simulations.
Physical Communication, 2014
This paper discusses random linear network coding with and without the use of a Vandermonde matrix to obtain the coding coefficients. Performance comparisons of such random linear network coded networks with networks employing traditional store and forward technique are also provided. It is shown that random linear network coding using a Vandermonde matrix can improve the network utilization factor by reducing the overhead compared to random linear coding that does not use a Vandermonde matrix. Our numerical results show that random linear network coding with a Vandermonde matrix provides a considerable improvement in throughput and delay when compared to a network employing a traditional store and forward strategy. An inherent feature of random linear network coding which makes it possible to employ simple encryption techniques is also discussed.
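The Vandermonde idea is easy to sketch: with distinct evaluation points, a Vandermonde matrix over a field is nonsingular, so any K coded packets decode and only the evaluation point needs to be conveyed instead of a full coefficient vector. The sketch below uses the prime field GF(257) purely as an example; the paper does not prescribe this field.

```python
# Vandermonde coding coefficients over GF(257) (field chosen arbitrarily
# for this sketch). Row i is (x_i^0, x_i^1, ..., x_i^(K-1)) mod p for a
# distinct evaluation point x_i; distinct points give a nonzero
# Vandermonde determinant, so any K rows are linearly independent.

P = 257   # a small prime field
K = 8     # generation size

def vandermonde_row(x, K, p=P):
    """Coefficient row (x^0, x^1, ..., x^(K-1)) mod p."""
    row, v = [], 1
    for _ in range(K):
        row.append(v)
        v = (v * x) % p
    return row

matrix = [vandermonde_row(x, K) for x in range(1, K + 1)]
print(matrix[0])
print(matrix[1])
```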
IEEE Transactions on Information Theory, 2011
To reduce computational complexity and delay in randomized network coded content distribution, and for some other practical reasons, coding is not performed simultaneously over all content blocks, but over much smaller, possibly overlapping subsets of these blocks, known as generations. A penalty of this strategy is throughput reduction. To analyze the throughput loss, we model coding over generations with random generation scheduling as a coupon collector's brotherhood problem. This model enables us to derive the expected number of coded packets needed for successful decoding of the entire content, as well as the probability of decoding failure (the latter only when generations do not overlap), and further, to quantify the tradeoff between computational complexity and throughput. Interestingly, with a moderate increase in the generation size, throughput quickly approaches link capacity. Overlaps between generations can further improve throughput substantially for relatively small generation sizes.
Index Terms: network coding, rateless codes, coupon collector's problem.
I. Introduction. A. Motivation: Coding over Disjoint and Overlapping Generations. Random linear network coding was proposed in [1] for "robust, distributed transmission and compression of information in networks". Subsequently, the idea found a place in a peer-to-peer (P2P) file distribution system, Avalanche [2], from Microsoft. In P2P systems such as BitTorrent, content distribution involves fragmenting the content at its source and using swarming techniques to disseminate the fragments among peers. Systems such as Avalanche, instead, circulate linear combinations of content fragments, which can be generated by any peer. The motivation behind such a scheme is that it is hard for peers to make optimal decisions on the scheduling of fragments based on their limited local vision, whereas when fragments are linearly combined at each node, topology diversity is implanted inherently in the data flows and can be exploited without further coordination.
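The coupon-collector view is easy to simulate. The sketch below draws a uniformly random generation for each coded packet and, as a simplification, assumes a generation of size g decodes once it has collected g packets (ignoring non-innovative combinations and overlaps, both of which the paper's analysis handles); it reproduces the qualitative effect that larger generations need fewer coded packets in total.

```python
import random

# Coupon-collector sketch of random generation scheduling. Assumption:
# a generation of size g decodes once it receives g coded packets.

def packets_until_all_decoded(num_gens, gen_size, trials=2000):
    total = 0
    for _ in range(trials):
        counts = [0] * num_gens
        sent = 0
        while min(counts) < gen_size:
            counts[random.randrange(num_gens)] += 1
            sent += 1
        total += sent
    return total / trials

for g in (1, 4, 16):
    n = 64 // g                     # keep the total content at 64 blocks
    print(f"{n:2d} generations of size {g:2d}: "
          f"E[coded packets] ~ {packets_until_all_decoded(n, g):.1f}")
```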
Problems of Information Transmission, 2010
We consider the decoding for Silva-Kschischang-Kötter random network codes based on Gabidulin's rank-metric codes. The model of a random network coding channel can be reduced to transmitting matrices of a rank code through a channel introducing three types of additive errors. The first type is called random rank errors. To describe other types, the notions of generalized row erasures and generalized column erasures are introduced. An algorithm for simultaneous correction of rank errors and generalized erasures is presented. An example is given.
IEEE Transactions on Communications, 2000
Intra-session network coding has been shown to offer significant gains in terms of achievable throughput and delay in settings where one source multicasts data to several clients. In this paper, we consider a more general scenario where multiple sources transmit data to sets of clients and study the benefits of inter-session network coding, when network nodes have the opportunity to combine packets from different sources. In particular, we propose a novel framework for optimal rate allocation in inter-session network coding systems. We formulate the problem as the minimization of the average decoding delay in the client population and solve it with a gradient-based stochastic algorithm. Our optimized inter-session network coding solution is evaluated in different network topologies and compared with basic intra-session network coding solutions. Our results show the benefits of proper coding decisions and effective rate allocation for lowering the decoding delay when the network is used by concurrent multicast sessions.
2010 Information Theory and Applications Workshop, ITA 2010 - Conference Proceedings, 2010
Understanding the delay behavior of network coding with a fixed number of receivers, small field sizes and a limited number of encoded symbols is a key step towards its applicability in real-time communication systems with stringent delay constraints. Previous results are typically asymptotic in nature and focus mainly on the average delay performance. Seeking to characterize the complete delay distribution of random linear network coding, we present a brute-force methodology that is feasible for up to four receivers, limited field and generation sizes. The key idea is to fix the pattern of packet erasures and to try out all possible encodings for various system and channel parameters. Our findings, which are valid for both decoding delay and ordered-delivery delay, can be used to optimize network coding protocols with respect not only to their average but also to their worst-case performance.
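The brute-force idea can be demonstrated at toy scale: enumerate every sequence of nonzero GF(2) coefficient vectors for a tiny generation and tabulate when the decoding matrix first reaches full rank. The sketch below does this for a single receiver on an erasure-free channel; the paper additionally fixes erasure patterns and scales to up to four receivers.

```python
import itertools

# Exhaustive delay distribution for one receiver, generation size K,
# over GF(2), trying all sequences of T nonzero coefficient vectors.
# Erasure-free channel for brevity; tiny K only, since the search space
# grows as (2^K - 1)^T.

def rank_gf2(vectors):
    """Rank of bitmask vectors over GF(2) via a highest-bit basis."""
    basis = {}
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h in basis:
                v ^= basis[h]
            else:
                basis[h] = v
                break
    return len(basis)

K, T = 3, 5                         # generation size, transmissions tried
nonzero = range(1, 2 ** K)          # all nonzero coefficient vectors
dist = {}
for seq in itertools.product(nonzero, repeat=T):
    delay = next((t + 1 for t in range(T) if rank_gf2(seq[:t + 1]) == K), None)
    dist[delay] = dist.get(delay, 0) + 1

total = len(nonzero) ** T
for d in sorted(dist, key=lambda x: (x is None, x or 0)):
    label = f"{d} packets" if d else f"> {T} packets"
    print(f"decoding delay {label}: probability {dist[d] / total:.4f}")
```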
IEEE Transactions on Communications, 2000
We study joint network and channel code design to optimize delay performance. Here, the delay is the transmission time of information packets from a source to sinks, without considering queuing effects. In our systems, network codes (network layer) sit on top of channel codes (physical layer), which are disturbed by noise. Network codes run in a rateless, random fashion and thus have erasure-correction capability. Under a finite transmission-time constraint, transmission errors are inevitable in the physical layer, and a detection error in the physical layer amounts to an erasure of a network codeword. For the analysis, we model the delay of each information generation in the network layer as independent, identically distributed random variables. Approaches for calculating delay measures in coded erasure networks are investigated. We show how to evaluate the rate and erasure probability of a set of channels belonging to one cut. We also show that the min-cut determines the decoding error probability at the sinks if the number of information packets is large. We observe that, for a given amount of source information, a larger packet length means fewer packets to transmit but a higher physical-layer detection error probability; conversely, a longer physical-layer transmission time (delay) yields a smaller detection error probability. The two parameters thus have opposite impacts on the physical and network layers with respect to delay, and their optimal values should be found through a cross-layer approach. We then formulate the problem of optimizing delay performance and discuss solutions.
2012 Conference Record of the Forty Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), 2012
In this paper, we propose a novel opportunistic decoding scheme for the network coding decoder that significantly reduces decoder complexity and increases throughput. Network coding was proposed to improve network throughput and reliability, especially for multicast transmissions. Although network coding increases network performance, the complexity of the decoding algorithm remains high, especially for higher-dimensional finite fields or larger network codes. Different software and hardware approaches have been proposed to accelerate the decoding algorithm, but the decoder remains the bottleneck for high-speed data transmission. We propose a novel decoding scheme that exploits the structure of the network coding matrix to reduce decoder complexity and improve throughput. We also implemented the proposed scheme on a Virtex 7 FPGA and compared our implementation to the widely used Gaussian elimination.
2011
In this paper, we prove the existence of capacity-achieving linear codes with random binary sparse generator matrices. Existing results on capacity-achieving linear codes in the literature are limited to random binary codes with equiprobable generator matrix elements and to codes with sparse parity-check matrices. Moreover, the codes with sparse generator matrices reported in the literature have not been proved to be capacity achieving.