2004, Computer Networks
Several objectives were identified in developing Random Early Drop (RED): decreasing queueing delay, increasing throughput, and improving fairness between short- and long-lived connections. It has been believed that the drop probability of a packet in RED does not depend on the size of the file to which it belongs. In this paper we study the fairness properties of RED, where fairness is taken with respect to the size of the transferred file. We focus on short-lived TCP sessions. Our findings are that (i) in terms of loss probabilities, RED is unfair: it favors short sessions; and (ii) RED is fairer in terms of the average throughput of a session (as a function of its size) than in terms of loss probabilities. We study various loading regimes with various versions of RED.
2000
The paper describes how 'drop-biasing', a technique to control the distribution of the gap between consecutive packet losses in random-drop queues (such as RED), can be used to reduce the variability of the queue occupancy with TCP traffic. Reducing the variance of the queue occupancy reduces delay jitter for buffered packets, as well as decreases the likelihood of buffer underflow. We find that modifying the packet drop probabilities to ensure a minimum separation between consecutive packet drops serves to decrease the variability in the queue occupancy. This is achieved as a result of the increased negative correlation among the congestion windows of the constituent TCP flows. Such negative correlation explains why the use of simple drop-biasing strategies can reduce the queue variability without increasing the likelihood of bursts of packet losses. The results of our investigations have relevance for the design and deployment of RED-like algorithms for congestion control in the Internet.
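The minimum-separation rule described above can be sketched in a few lines. This is an illustrative model, assuming a Bernoulli per-packet drop decision; the class name and the `drop_prob` and `min_gap` values are our own choices, not taken from the paper:

```python
import random

class DropBiasedRED:
    """Toy random-drop queue that enforces a minimum gap between drops."""

    def __init__(self, drop_prob=0.02, min_gap=10, seed=0):
        self.drop_prob = drop_prob        # nominal per-packet drop probability
        self.min_gap = min_gap            # minimum packets between two drops
        self.since_last_drop = min_gap    # allow a drop for the first packet
        self.rng = random.Random(seed)

    def should_drop(self):
        self.since_last_drop += 1
        if self.since_last_drop <= self.min_gap:
            return False                  # too soon after the previous drop
        if self.rng.random() < self.drop_prob:
            self.since_last_drop = 0
            return True
        return False

queue = DropBiasedRED()
decisions = [queue.should_drop() for _ in range(10_000)]
drops = [i for i, d in enumerate(decisions) if d]
```

Every pair of consecutive drops in `drops` is separated by more than `min_gap` packets, so losses tend to hit different flows at different times — the mechanism the abstract credits for the induced negative correlation among congestion windows.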
2005
Under the TCP congestion control regime, heterogeneous flows, i.e., flows with different round-trip times (RTTs), that share the same bottleneck link will not attain equal portions of the available bandwidth. In fact, according to the TCP-friendly formula [1], the throughput ratio of two flows is inversely proportional to the ratio of their RTTs. It has also been shown that TCP's unfairness to flows with longer RTTs is accentuated under loss synchronization. Well-known mechanisms to avoid synchronization are based on injecting randomness into the network, e.g., introducing background traffic or using random drop (as opposed to drop-tail queuing). In this paper, we show that, in high-speed networks, injecting bursty background traffic may actually lead to synchronization and result in unfairness to foreground TCP flows with longer RTTs. We observe that unfairness is especially severe in high-speed variants of TCP such as Scalable TCP (S-TCP) and HighSpeed TCP (HSTCP). We propose three different metrics to characterize traffic burstiness and show that these metrics are reliable predictors of TCP unfairness. Finally, we show that TCP unfairness (including TCP SACK, S-TCP, and HSTCP) in high-speed networks due to bursty background traffic can be mitigated through the use of random-drop queuing disciplines (such as RED) at bottleneck routers.
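The inverse-RTT relationship cited from the TCP-friendly formula [1] can be checked numerically. A minimal sketch, assuming the common form throughput ≈ (MSS/RTT)·√(3/2)/√p; the MSS, RTT, and loss values are illustrative:

```python
import math

def tcp_throughput(mss_bytes, rtt_s, loss_prob):
    # TCP-friendly formula: throughput ~ (MSS / RTT) * sqrt(3/2) / sqrt(p)
    return (mss_bytes / rtt_s) * math.sqrt(1.5) / math.sqrt(loss_prob)

p = 0.01                                  # shared bottleneck loss probability
fast = tcp_throughput(1460, 0.02, p)      # flow with a 20 ms RTT
slow = tcp_throughput(1460, 0.08, p)      # flow with an 80 ms RTT

# With equal loss, the throughput ratio is the inverse of the RTT ratio (4:1).
ratio = fast / slow
```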
The algorithm for controlling TCP congestion and throughput is the main reason we can use the Internet successfully today, despite resource bottlenecks and very large, unforeseen numbers of users. There are several implementations of this protocol, among which we can single out Vegas, Fack, and Sack. Through simulations in NS-2, we were able to judge their performance under congestion, finding that Vegas performed best. We have also highlighted the concept of fairness and unfairness among TCP types. The results of the respective simulations showed how "fair" these variants were in their use of bandwidth, where TCP Vegas stands out for its fairer behavior. Finally, we focused on the effects of queuing algorithms, such as RED and the proposed algorithm, and their impact on fairness between TCP variants.
New Trends in Computer Networks, 2005
It is expected that the proportional fair (PF) scheduler will be used widely in cdma2000 1xEV-DO systems because it maximizes the sum of each user's utility, which is given by the logarithm of its average throughput. However, in terms of short-term average throughput, the PF scheduler may lead to large RTT variation. We analyze the impact of the PF scheduler on TCP start-up behavior through NS-2 simulation. To show the impact of PF scheduling on TCP, we also analyze the packet transmission delay under the PF scheduling policy through a mathematical model.
Computer Communications, 2009
The short-term dynamics of competing high-speed TCP flows can have strong impacts on their long-term fairness. This leads to severe problems for both the coexistence and the deployment feasibility of different proposals for next-generation networks. However, to the best of our knowledge, no root-cause analysis of this observation is available; this is the major motivation of our work. The contribution of the paper is twofold. First, we present comprehensive performance-evaluation results for both inter- and intra-protocol fairness behavior of different TCP versions to get an overall view of these protocols. The analysis reveals not only the equilibrium behavior but also the transient characteristics and the dynamic behavior. Second, we show the results of a root-cause analysis to gain a deeper understanding of some promising TCP versions. This study not only fills in the "black holes", i.e. answers questions that remained unanswered in some cases, but goes deeper and investigates questions that have not been asked before. The work includes flow-level, packet-level, queueing, and spectral analyses. Three loss-based proposals (HighSpeed TCP, Scalable TCP, and BIC TCP) and the delay-based FAST TCP are investigated in detail with both "dumb-bell" and "parking-lot" topologies.
annals of telecommunications - annales des télécommunications, 2010
Fairness of competing transmission control protocol (TCP) flows is an integral and indispensable part of transport protocol design for next-generation, high-bandwidth-delay product networks. It is not just a protocol-intrinsic property but it could also have severe impact on quality of experience (QoE). In this paper, we revisit FAST TCP fairness behavior based on a comprehensive performance evaluation study. We demonstrate that FAST TCP with proper parameter settings can always achieve fair behavior with HighSpeed TCP and Scalable TCP. We also show that this behavior is a rather robust property of the protocol concerning different traffic mix or network topology. The dynamic behavior of reaching the fair equilibrium state can be different, which is demonstrated in the paper. Our study also emphasizes the important need for finding a dynamic sensitive fairness metric for performance evaluation of transport protocols for next-generation, high-bandwidth-delay product networks.
Bandwidth sharing between multiple TCP connections has been studied under the assumption that the windows of the different connections vary in a synchronized manner. This synchronization is a main result of the deployment of Drop Tail buffers in network routers. The deployment of active queue management techniques such as RED will alleviate this problem of synchronization. We develop in this paper a mathematical model to study how the bottleneck bandwidth will be shared if TCP windows are not synchronized.
… 2007. Ad Hoc and …, 2010
Proceedings. Eleventh International Conference on Computer Communications and Networks
The bias of TCP's congestion avoidance mechanism against connections with long Round Trip Times (RTT) is a known fact. Many alternate congestion avoidance policies have been proposed to improve fairness. Though the proposed policies attempt to address and resolve the fairness issue, we show in this paper that they tend to be harmful to connections that traverse either slow links, like 56 Kbps modem links, or Long Thin Networks (LTNs), like cellular links. We specifically consider a very common scenario where the last-hop link connecting the end user (i.e., client) may be a slow link or an LTN. In this case, the TCP sender (i.e., server) is usually unaware of the network path, and when equipped with such policies it probes the network more aggressively in a quest for non-existent bandwidth. In this paper we conduct simulation studies to evaluate the impact of the proposed policies on connections that traverse either slow links or LTNs. We notice that the proposed policies cause increased buffer overflows at the last-hop router, thereby degrading the performance of the connection. We study the impact of increased buffer sizes at the last-hop router and also the effect of advertising a limited receive window. We show that the impact of the policies on connections traversing slow links or LTNs can be reduced by selectively disabling the policies.
at INET, 2000
TCP Vegas is expected to achieve higher throughput than the TCP Tahoe and Reno versions currently used in the Internet. However, we need to consider a migration path for TCP Vegas when it is deployed in the Internet. In this paper, we focus on the situation where multiple TCP Reno and Vegas connections coexist at a bottleneck router, and investigate the fairness property to assess the possibility of future deployment of TCP Vegas. We consider drop-tail and RED (Random Early Detection) algorithms as the buffering discipline at the router, and evaluate the effect of the RED algorithm on fairness enhancement. From the analysis and simulation results, we find that fairness between TCP Reno and Vegas cannot be maintained at all with a drop-tail router. Although the RED algorithm improves fairness to some degree, there is an inevitable trade-off between fairness and throughput.
1999
Two approximate techniques for analyzing the window size distribution of TCP flows sharing a RED-like bottleneck queue are presented. Both methods presented first use a fixed point algorithm to obtain the mean window sizes of the flows, and the mean queue length in the bottleneck buffer. The simpler of the two methods then uses the 'square root formula' for TCP; the other method is more complicated. More often than not, the simpler method is slightly more accurate; this is probably due to the fact that window sizes of the different flows are negatively correlated.
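The fixed-point construction in the abstract can be illustrated with a deliberately simplified model: each flow's mean window obeys the 'square root formula' W = √(3/(2p)), and the RED-like drop probability is assumed (purely for illustration) to grow linearly with the total offered window. The slope and iteration count are our own choices:

```python
import math

def fixed_point_mean_window(n_flows, slope=1e-4, iters=200):
    p = 0.01                                # initial guess for drop probability
    for _ in range(iters):
        w = math.sqrt(3.0 / (2.0 * p))      # 'square root formula' per flow
        p = min(0.5, slope * n_flows * w)   # toy RED: drop prob rises with load
    return w, p

w, p = fixed_point_mean_window(n_flows=10)
```

The iteration is a contraction here, so `w` and `p` settle to mutually consistent values; the methods in the paper then refine this mean-value picture into a full window-size distribution.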
2015
This thesis discusses the Random Early Detection (RED) algorithm, proposed by Sally Floyd, used for congestion avoidance in computer networking; how existing algorithms compare to this approach; and the configuration and implementation of the Weighted Random Early Detection (WRED) variant. RED calculates the probability that a packet will be dropped before periods of high congestion, based on the minimum and maximum queue thresholds, the average queue length, the packet size, and the number of packets since the last drop.
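The probability calculation summarized above can be sketched directly. The threshold and weight values follow common RED defaults but are assumptions here, and the sketch omits RED's final scaling by the count of packets since the last drop (p_a = p_b / (1 − count·p_b)) and by packet size:

```python
def red_avg(avg, qlen, weight=0.002):
    # exponentially weighted moving average of the instantaneous queue length
    return (1 - weight) * avg + weight * qlen

def red_drop_prob(avg, min_th=5, max_th=15, max_p=0.1):
    if avg < min_th:
        return 0.0                                     # below min: no early drops
    if avg >= max_th:
        return 1.0                                     # above max: drop everything
    return max_p * (avg - min_th) / (max_th - min_th)  # linear ramp in between
```

For example, with the defaults above, an average queue of 10 packets sits halfway up the ramp and yields a drop probability of 0.05.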
International Journal of Computer Applications, 2011
In a network, the most common transport protocol is the Transmission Control Protocol (TCP). TCP comes in many variants, such as Tahoe, Reno, NewReno, Vegas, and STCP. Each of these variants behaves differently in different networks according to the parameters of that network. On the other hand, there are four common routing protocols used in such networks: DSDV, DSR, AODV, and TORA. In this paper, we simulate networks with differing parameters to analyze the behavior of the most common protocols, DSDV and AODV, with different variants of TCP. By creating different networks in the ns-2 simulator, we analyze the behavior of the routing protocols with these TCP variants on the basis of the number of packet drops in each case; the fewer the drops, the better the algorithm. This paper thus analyzes which TCP variant has lower drop rates with which routing protocol.
International Journal of Advanced Computer Science and Applications, 2016
TCP (Transmission Control Protocol) is the main transport protocol used in high-speed networks. In the OSI model, TCP sits in the transport layer and serves as a connection-oriented protocol which performs handshaking to create a connection. In addition, TCP provides end-to-end reliability. There are different standard variants of TCP (e.g. TCP Reno, TCP NewReno) which implement mechanisms to dynamically control the size of the congestion window, but they do not have any control over the sending time of successive packets. TCP pacing introduces the concept of controlling the packet sending time at TCP sources to reduce packet loss in a bursty traffic network. Randomized TCP is a new TCP pacing scheme which has shown better performance (considering throughput and fairness) than other TCP variants in bursty networks. The end-to-end delay of Randomized TCP is an important performance measure which has not yet been addressed. In current high-speed networks, it is increasingly important to have mechanisms that keep end-to-end delay within an acceptable range. In this paper, we present a performance evaluation of the end-to-end delay of Randomized TCP. To this end, we use an analytical and a simulation model to characterize its end-to-end delay performance.
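The pacing idea can be sketched as follows. This is our own illustration of a randomized pacing scheme — even spacing of one window over the RTT with a uniform jitter — not the exact algorithm of Randomized TCP:

```python
import random

def pacing_gaps(cwnd_pkts, rtt_s, jitter=0.2, rng=random.Random(42)):
    base = rtt_s / cwnd_pkts  # even spacing of the window over one RTT
    # perturb each gap by up to +/- jitter so paced flows do not synchronize
    return [base * (1 + rng.uniform(-jitter, jitter)) for _ in range(cwnd_pkts)]

# a 10-packet window paced over a 100 ms RTT: gaps near 10 ms each
gaps = pacing_gaps(cwnd_pkts=10, rtt_s=0.1)
```

Without pacing, all 10 packets would leave back-to-back as a burst; with it, each inter-packet gap stays within ±20% of the 10 ms base spacing.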
IEEE Communications Letters, 2005
This letter considers problems of unfairness and excessive variations of the router queue associated with FAST TCP operations due to inaccurate estimation of the round-trip propagation delay. Using a simple example, we explain unfairness in FAST TCP caused by this inaccurate estimation. We also present analytical and simulation performance studies and show that improving this estimation by giving the first packet in every flow priority can improve fairness and reduce queuing variations.
Reinventing the Web
Congestion is an unavoidable issue in networking, and many attempts and mechanisms have been devised to avoid and control it in diverse ways. Random Early Discard (RED) is one such algorithm; it applies Active Queue Management (AQM) techniques to prevent and control congestion and to provide a range of Internet performance facilities. In this chapter, the performance of the RED algorithm is measured from different points of view. RED works with the Transmission Control Protocol (TCP), and since TCP has several variants, the authors investigate which versions of TCP behave well with RED in terms of a few network parameters. The performance of RED is also compared with its counterpart, the Drop Tail algorithm. These statistics are necessary to select the best protocol for Internet performance optimization.
2004
While randomly generated sequences of short-lived TCP flows may provide some reductions (up to 10%) in the throughput of long-lived flows, we generate scenarios that cause much greater reductions (> 85%). Similar scenarios achieve similar reductions for several TCP variants (Tahoe, Reno, New Reno, Sack) and for different packet drop policies (DropTail and RED). Index Terms: simulations, experimentation with real networks/testbeds.
This paper presents an approach to modeling a TCP connection during the slow-start phase. Such modeling can be used for TCP connection analysis with reduced computational complexity compared to packet-level simulators. The proposed model is validated by comparing its results with those obtained from ns-2 simulations.