Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement, 2009
Since the last in-depth studies of measured TCP traffic some 6-8 years ago, the Internet has experienced significant changes, including the rapid deployment of backbone links with 1-2 orders of magnitude more capacity, the emergence of bandwidth-intensive streaming applications, and the massive penetration of new TCP variants. These and other changes raise the question of whether the characteristics of measured TCP traffic in today's Internet reflect these changes or have largely remained the same. To answer this question, we collected and analyzed packet traces from a number of Internet backbone and access links, focusing on the "heavy-hitter" flows responsible for the majority of traffic. We then analyzed their within-flow packet dynamics and observed the following: (1) in one of our datasets, up to 15.8% of flows have an initial congestion window (ICW) size larger than the upper bound specified by RFC 3390; (2) among flows that encounter retransmission rates of more than 10%, 5% exhibit irregular retransmission behavior in which the sender does not slow down its sending rate during retransmissions; (3) TCP flow clocking (i.e., regular spacing between flights of packets) can be caused by both RTT and non-RTT factors such as the application or link layer, and 60% of the flows studied show no pronounced flow clocking. To arrive at these findings, we developed novel techniques for analyzing unidirectional TCP flows, including a technique for inferring ICW size, a method for detecting irregular retransmissions, and a new approach for accurately extracting flow clocks.
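The ICW-inference idea can be sketched minimally: with only the data direction of a flow visible, count the segments sent back-to-back before the sender's first ACK-induced pause. This is an illustrative sketch, not the paper's algorithm; the function name, the gap threshold, and the synthetic trace are assumptions.

```python
# Hypothetical sketch of initial-congestion-window (ICW) inference from a
# unidirectional packet trace: count the data segments in the first "flight",
# i.e. packets sent back-to-back before the sender pauses to wait for the
# first ACK. The 100 ms gap threshold is illustrative, not from the paper.

def infer_icw(packets, gap_threshold=0.1):
    """packets: list of (timestamp, payload_bytes) for one flow's data
    direction, sorted by time. Returns (segments, bytes) in the first flight."""
    if not packets:
        return 0, 0
    segs, total = 1, packets[0][1]
    for (t_prev, _), (t_cur, size) in zip(packets, packets[1:]):
        if t_cur - t_prev > gap_threshold:  # pause => end of first flight
            break
        segs += 1
        total += size
    return segs, total

# Synthetic flow: 4 segments sent within 2 ms, then a ~100 ms RTT pause.
trace = [(0.000, 1460), (0.001, 1460), (0.001, 1460), (0.002, 1460),
         (0.105, 1460), (0.106, 1460)]
print(infer_icw(trace))  # -> (4, 5840)
```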
… Infrastructure for the …
Proceedings of the ACM on Measurement and Analysis of Computing Systems, 2019
In 2016, Google proposed and deployed a new TCP variant called BBR. BBR represents a major departure from traditional congestion-window-based congestion control. Instead of using loss as a congestion signal, BBR uses estimates of the bandwidth and round-trip delays to regulate its sending rate. The last major study on the distribution of TCP variants on the Internet was done in 2011, so it is timely to conduct a new census given the recent developments around BBR. To this end, we designed and implemented Gordon, a tool that allows us to measure the exact congestion window (cwnd) corresponding to each successive RTT in the TCP connection response of a congestion control algorithm. To compare a measured flow to the known variants, we created a localized bottleneck where we can introduce a variety of network changes like loss events, bandwidth change, and increased delay, and normalize all measurements by RTT. An offline classifier is used to identify the TCP variant based on the cwnd ...
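As a toy illustration of classifying a variant from the kind of per-RTT cwnd trace Gordon extracts, one can look at the shape of the window's growth during congestion avoidance. This is a deliberately simplified sketch with invented names, not Gordon's classifier, and real CUBIC growth is concave-then-convex rather than uniformly accelerating.

```python
# Toy sketch (not Gordon's offline classifier): label a congestion-avoidance
# cwnd trace by its per-RTT growth. Constant +1-segment growth suggests
# Reno-style AIMD; steadily accelerating growth suggests a CUBIC-style curve.

def classify_growth(cwnds):
    """cwnds: congestion window (in segments) sampled once per RTT."""
    if len(cwnds) < 3:
        return "insufficient data"
    diffs = [b - a for a, b in zip(cwnds, cwnds[1:])]
    if all(d == diffs[0] for d in diffs):
        return "linear (Reno-like)"
    if all(b >= a for a, b in zip(diffs, diffs[1:])):
        return "accelerating (CUBIC-like)"
    return "other"

print(classify_growth([10, 11, 12, 13, 14]))  # -> linear (Reno-like)
print(classify_growth([10, 11, 13, 17, 25]))  # -> accelerating (CUBIC-like)
```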
2011
The Internet is constantly changing and evolving. In this thesis, the behaviour of various aspects of the TCP implementations underlying the Internet is measured, including the Initial Congestion Window (ICW), the type of reaction to loss, Selective Acknowledgment (SACK) support, and Explicit Congestion Notification (ECN) support. We develop a new method to measure the congestion window reduction triggered by loss inferred from three duplicate ACKs. In a previous study, 94% of classified servers showed window halving, whereas we found that 50% of classified servers exhibited Binary Increase Congestion control (BIC) or Cubic-style behaviour, a departure from the Request For Comments (RFC) requirement to reduce the congestion window by at least 50%. ECN is predicted to improve Internet performance, but previous studies revealed low support for it (0.5%) and a high failure rate of ECN connections due to middlebox interference (9%); in this thesis we show a steady increase over time of E...
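The window-reduction measurement described above boils down to comparing cwnd before and after a triple-duplicate-ACK event. A hedged sketch with illustrative thresholds: Reno-style halving gives a factor near 0.5, while CUBIC's documented multiplicative-decrease factor is about 0.7.

```python
# Illustrative check of the multiplicative-decrease factor inferred from
# cwnd before and after a triple-duplicate-ACK loss event. The 0.05
# tolerance and the labels are assumptions for this sketch.

def decrease_factor(cwnd_before, cwnd_after):
    return cwnd_after / cwnd_before

def label(beta, tol=0.05):
    if abs(beta - 0.5) <= tol:
        return "halving (Reno/RFC-compliant)"
    if beta > 0.55:
        return "gentler backoff (BIC/CUBIC-style)"
    return "other"

print(label(decrease_factor(100, 50)))  # -> halving (Reno/RFC-compliant)
print(label(decrease_factor(100, 70)))  # -> gentler backoff (BIC/CUBIC-style)
```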
2011
Up-to-date TCP traffic characteristics are essential for research and development of protocols and applications. This paper presents recent trends observed in 70 measurements on backbone links from 2006 and 2009. First, we provide general characteristics such as packet size distributions and TCP option usage. We confirm previous observations such as the dominance of TCP as a transport and the higher utilization of TCP options. Next, we look at out-of-sequence (OOS) TCP segments. OOS segments often have negative effects on TCP performance and therefore require special consideration. While the total fraction of OOS segments is stable in our measurements, we observe a significant decrease in OOS due to packet reordering (from 22.5% to 5.2% of all OOS segments). We verify that this development is a general trend in our measurements and not caused by single hosts/networks or special temporal events. Our findings are surprising, as many researchers have previously speculated that the amount of reordering is increasing.
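A minimal way to flag out-of-sequence segments in a trace is to count data packets whose sequence number falls below the highest sequence number already seen; distinguishing reordering from retransmission, as the paper does, requires additional evidence such as IP IDs or timing, which this sketch omits.

```python
# Minimal out-of-sequence (OOS) detector: a data packet is OOS if it
# arrives after a packet with a higher sequence number has been captured.
# This does not distinguish reordering from retransmission.

def count_out_of_sequence(seqs):
    """seqs: sequence numbers of a flow's data packets in capture order."""
    max_seen, oos = -1, 0
    for s in seqs:
        if s < max_seen:
            oos += 1        # arrived after a higher sequence number: OOS
        else:
            max_seen = s
    return oos

# Segment 3000 reappears after 4500 has been seen: one OOS segment.
print(count_out_of_sequence([0, 1500, 3000, 4500, 3000, 6000]))  # -> 1
```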
Computer Communication Review, 1997
We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20,000 TCP bulk transfers between 35 Internet sites. Because we traced each 100 Kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behaviors due to the different directions of the Internet paths, which often exhibit asymmetries. We characterize the prevalence of unusual network events such as out-of-order delivery and packet corruption; discuss a robust receiver-based algorithm for estimating "bottleneck bandwidth" that addresses deficiencies discovered in techniques based on "packet pair"; investigate patterns of packet loss, finding that loss events are not well-modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales. In § 3 we characterize unusual network behavior: out-of-order delivery, replication, and packet corruption. Then in § 4 we discuss a robust algorithm for estimating the "bottleneck" bandwidth that limits a connection's maximum rate. This estimation is crucial for subsequent analysis because knowing the bottleneck rate lets us determine when the closely-spaced TCP data packets used for our network probes are correlated with each other. (We note that the stream of ack packets returned by the TCP data receiver in general is not correlated, due to the small size and larger spacing of the acks.) Once we can determine which probes were correlated and which not, we then can turn to analysis of end-to-end Internet packet loss (§ 5) and delay (§ 6). In § 7 we briefly summarize our findings, a number of which challenge commonly-held assumptions about network behavior.
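The packet-pair principle this paper builds on can be shown in a few lines: two packets sent back-to-back leave the bottleneck separated by the serialization time of the second, so packet size divided by the receiver-side gap estimates the bottleneck rate. The sketch below uses a median gap as a crude filter; the paper's receiver-based algorithm is considerably more robust. All names and numbers are illustrative.

```python
# Minimal packet-pair sketch of receiver-side bottleneck-bandwidth
# estimation. Real estimators must filter out pairs perturbed by cross
# traffic; taking the median gap is only a crude stand-in for that.

def packet_pair_estimate(arrivals, size_bytes):
    """arrivals: receiver timestamps (s) of back-to-back probe packets."""
    gaps = sorted(b - a for a, b in zip(arrivals, arrivals[1:]))
    median_gap = gaps[len(gaps) // 2]   # crude robustness to cross traffic
    return size_bytes * 8 / median_gap  # bottleneck estimate, bits/second

# 1500-byte packets arriving 1.2 ms apart -> roughly 10 Mbit/s bottleneck
print(packet_pair_estimate([0.0, 0.0012, 0.0024, 0.0036], 1500))
```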
2014
Arguably, understanding the behaviour of TCP is essential to understanding the behaviour of the whole Internet, since (1) the majority of traffic flows use it for their transport, (2) it has been around since the inception of the global net, showing remarkable scalability and robustness, (3) it has been the subject of many modifications in order to absorb technological innovations, and (4) through its self-clocking disposition it has displayed flexibility and fairness. The heuristic nature of the TCP extensions and the inherent complexity of the Internet have provided the justification for using testbeds and simulations to describe and study the nature of TCP and the effect of the protocol on the Net. However, bearing in mind that the rudimentary TCP protocol is actually a finite state machine (FSM), and acknowledging the need for analytical modelling to supplement the empirical work and thus provide the necessary predictability, has lately induced intensive research into mathematica...
International Conference on Internet Monitoring and Protection, 2016
Although Internet traffic continues to grow, it is sometimes pointed out that a small number of heavy users consume a large part of the network bandwidth. A practical way to address this problem is to suppress large traffic flows that do not conform to Transmission Control Protocol (TCP) congestion control algorithms. For this purpose, network operators need to infer the congestion control algorithms of individual TCP flows from packet traces collected passively in the middle of the network. In our previous paper, we proposed a new scheme to characterize TCP algorithms from packet traces: it estimates the congestion window size (cwnd) at a TCP sender at round-trip-time intervals and expresses the cwnd growth as a function of the estimated cwnd. We showed that this scheme can characterize most recently introduced TCP algorithms. In an actual network environment, however, a packet trace captured on a link, especially a backbone link, often contains only unidirectional TCP segments due to asymmetric routing. In this case it is difficult to estimate the cwnd itself, and a new analysis scheme is required. This paper studies how to characterize TCP congestion control algorithms from unidirectional packet traces: we use the data size transmitted during a short period of time and, with it, apply our former scheme to the unidirectional trace. We present results from applying the proposed method to popular TCP algorithms such as TCP Reno and CUBIC TCP.
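The core idea, approximating cwnd from a unidirectional trace by the data sent per RTT interval, can be sketched as follows. The RTT is assumed known (e.g. estimated separately); the function name and trace are synthetic, not from the paper.

```python
# Sketch: with only the data direction visible, approximate the congestion
# window's evolution by the number of bytes the sender transmits in each
# RTT-wide interval.

def bytes_per_rtt(packets, rtt):
    """packets: (timestamp, payload_bytes) in time order; rtt in seconds.
    Returns the bytes sent in each successive RTT-wide interval."""
    if not packets:
        return []
    start = packets[0][0]
    bins = {}
    for t, size in packets:
        idx = int((t - start) / rtt)
        bins[idx] = bins.get(idx, 0) + size
    return [bins.get(i, 0) for i in range(max(bins) + 1)]

# Slow-start-like doubling: 1, 2, then 4 segments per 100 ms RTT.
trace = [(0.00, 1460),
         (0.10, 1460), (0.101, 1460),
         (0.20, 1460), (0.201, 1460), (0.202, 1460), (0.203, 1460)]
print(bytes_per_rtt(trace, 0.1))  # -> [1460, 2920, 5840]
```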
1998
The rapid growth of the World Wide Web in recent years has caused a significant shift in the composition of Internet traffic. Although past work has studied the behavior of TCP dynamics in the context of bulk-transfer applications, and some studies have begun to investigate the interactions of TCP and HTTP, few have used extensive real-world traffic traces to examine the problem. This interaction is interesting because of the way in which current Web browsers use TCP connections: multiple concurrent short connections from a single host.
13th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems
Micro-bursts from TCP flows are investigated. The chiprate is introduced and used to quantify the short-term bit rate of TCP flows. This paper examines packets with chiprates above the 90th percentile, over time scales ranging from 244 µs to 125 ms. Two issues are addressed: the impact and the causes of the micro-bursts. It is found that packets with a high chiprate experience an elevated probability of burst losses; for example, the probability of a burst loss is up to 10 times larger for packets sent in micro-bursts. Furthermore, in some settings, these packets experience a higher loss rate in general. Evidence is also provided that burst losses impact not only the flow that causes them but also other flows in the network, and that micro-bursts cause an increase in queuing delay. Investigating the causes, one finding is that at short time scales, ACK clocking, which should reduce micro-bursts, is not functioning correctly; for example, in some cases, most of the packets contained in micro-bursts are ACKed at a rate that is less than half of the data rate. This strongly suggests that ACK clocking should not be relied upon to limit the data rate at short timescales and that TCP pacing should be deployed, especially on servers with high-speed access links. * This work was prepared through collaborative participation in the Collaborative Technology Alliance for Communications and Networks sponsored by the U.S. Army Research Laboratory under Cooperative Agreement DAAD19-01-2-0011. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation thereon.
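A chiprate-style short-term rate can be illustrated by binning a flow's bytes into windows of width equal to the chosen timescale; the paper then flags packets in windows above the 90th percentile. This is a hedged sketch with synthetic data, not the paper's exact definition.

```python
# Illustrative short-term bit-rate computation at a chosen timescale T:
# bin packet bytes into T-wide windows and divide by T. Names and the
# synthetic trace are assumptions, not taken from the paper.

def short_term_rates(packets, timescale):
    """packets: (timestamp, bytes). Returns bits/s for each window."""
    if not packets:
        return []
    start = packets[0][0]
    bins = {}
    for t, size in packets:
        idx = int((t - start) / timescale)
        bins[idx] = bins.get(idx, 0) + size * 8
    return [bins.get(i, 0) / timescale for i in range(max(bins) + 1)]

# A 4-packet micro-burst in the first 1 ms window, then a quieter window:
# roughly 48 Mbit/s followed by roughly 12 Mbit/s.
trace = [(0.0000, 1500), (0.0002, 1500), (0.0004, 1500), (0.0006, 1500),
         (0.0015, 1500)]
print(short_term_rates(trace, 0.001))
```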
2013 IEEE International Conference on Communications (ICC), 2013
This paper considers the fundamental measurements which drive TCP flows: throughput, RTT, and loss. It is clear that throughput is, in some sense, a function of both RTT and loss. In their seminal paper, Padhye et al. begin with a mathematical model of the TCP sliding-window evolution process and derive an equation showing that TCP throughput is (roughly) proportional to 1/(RTT√p), where p is the probability of packet loss. Their equation is shown to be consistent with data gathered on several links. This paper takes the opposite approach and analyses a large number of packet traces from well-known sources in order to create a data-driven estimate of the functions which relate TCP throughput, loss, and RTT. Regression analysis is used to fit models connecting the quantities. The fitted models show different behaviour from that expected in [1].
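For concreteness, the well-known simplified "square-root" form of the Padhye et al. model is T ≈ (MSS/RTT)·√(3/(2p)), which exhibits the 1/(RTT√p) proportionality the abstract refers to. The constants below are the standard simplified ones, not values from this paper, and the example inputs are illustrative.

```python
import math

# Simplified square-root form of the Padhye et al. steady-state TCP
# throughput model: T ~= (MSS / RTT) * sqrt(3 / (2p)).

def padhye_throughput(mss_bytes, rtt_s, loss_prob):
    """Returns an approximate steady-state throughput in bits per second."""
    return (mss_bytes * 8 / rtt_s) * math.sqrt(3 / (2 * loss_prob))

# 1460-byte MSS, 100 ms RTT, 1% loss -> roughly 1.4 Mbit/s
print(padhye_throughput(1460, 0.1, 0.01))
```

Note how the model predicts throughput falling with both rising RTT and rising loss probability, which is exactly the relationship the paper probes empirically with regression.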