2017
TCP adaptation and retransmission strategies provide robust advantages by abstracting the development of Internet applications from the development of lower layers. However, this abstraction hides useful low-level information from researchers and administrators who could otherwise use insights from the performance of TCP and the lower layers to diagnose problems and improve TCP performance. Although common tools exist for manual analysis of TCP performance, some of these tools are outdated or arduous to use. The primary contribution of this thesis is Vessel, a tool that supplements existing tools with per-connection instrumentation, improving the ability to run network analysis tests while providing sufficiently detailed information to identify differences between tests. Vessel leverages the extended Berkeley Packet Filter (eBPF) and Linux network namespaces. We demonstrate the utility of the tool by analyzing TCP flows associated with competing web-based speed measureme...
IEICE Transactions on Communications, 2004
Since the TCP is the transport protocol for most Internet applications, evaluation of TCP throughput is important. In this paper, we establish a framework of evaluating TCP throughput by simple measurement. TCP throughput is generally measured by sending TCP traffic and monitoring its arrival or using data from captured packets, neither of which suits our proposal because of heavy loads and lack of scalability. While there has been much research into the analytical modeling of TCP behavior, this has not been concerned with the relationship between modeling and measurement. We thus propose a lightweight method for the evaluation of TCP throughput by associating measurement with TCP modeling. Our proposal is free from the defects of conventional methods, since measurement is performed to obtain the input parameters required to calculate TCP throughput. Numerical examples show the proposed framework's effectiveness.
RFC, 2011
This memo describes a methodology for measuring sustained TCP throughput performance in an end-to-end managed network environment. This memo is intended to provide a practical approach to help users validate the TCP layer performance of a managed network, which should provide a better indication of end-user application level experience. In the methodology, various TCP and network parameters are identified that should be tested as part of the network verification at the TCP layer.
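Methodologies of this kind typically start from the bandwidth-delay product (BDP), the minimum window needed to fill a path, and from the ideal throughput a given window permits. As a minimal illustrative sketch (not the memo's prescribed procedure; the function names and example figures are ours):

```python
def bdp_bytes(bottleneck_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product in bytes: the minimum amount of in-flight
    data needed to keep a path of the given capacity and RTT full."""
    return bottleneck_bps * rtt_s / 8


def ideal_tcp_throughput_bps(window_bytes: float, rtt_s: float) -> float:
    """Upper bound on TCP throughput when the send window, not the path,
    is the limiting factor: one window per round trip."""
    return window_bytes * 8 / rtt_s


# Example: a 100 Mbit/s bottleneck with a 40 ms RTT needs a 500 kB window
# to be filled; a 64 KiB window caps throughput at about 13.1 Mbit/s.
bdp = bdp_bytes(100e6, 0.040)                    # 500000.0 bytes
capped = ideal_tcp_throughput_bps(65535, 0.040)  # 13107000.0 bit/s
```

Comparing achieved throughput against this window/RTT bound is a common first step in deciding whether a transfer is window-limited or path-limited.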
Modeling, Analysis, and Simulation of Computer …, 2005
We propose a model of TCP performance that captures the behavior of a set of network paths with diverse characteristics. The model uses more parameters than others, but we show that each feature of the model describes an effect that is important for at least some paths. We show that the model is sufficient to describe the datasets we collected with acceptable accuracy. Finally, we show that the model's parameters can be estimated using simple, application-level measurements.
Proceedings of the 2015 Federated Conference on Computer Science and Information Systems, 2015
The Transmission Control Protocol (TCP) is still used by the vast majority of Internet applications. However, the huge increase in bandwidth availability and consumption over the last decade has stimulated the evolution of TCP and the introduction of new versions better suited to high-speed networks. Many factors can influence the performance of the TCP protocol, from scarcity of network resources, through client or server misconfiguration, to internal limitations of applications. Proper identification of TCP performance bottlenecks is therefore an important challenge for network operators. In this paper we propose a methodology for finding the root causes of throughput degradation in TCP connections based on passive measurements. The methodology was verified by experiments conducted in a live network with 4G wireless Internet access.
Proceedings of the 9th ACM SIGCOMM conference on Internet measurement, 2009
Since the last in-depth studies of measured TCP traffic some 6-8 years ago, the Internet has experienced significant changes, including the rapid deployment of backbone links with 1-2 orders of magnitude more capacity, the emergence of bandwidth-intensive streaming applications, and the massive penetration of new TCP variants. These and other changes raise the question of whether the characteristics of measured TCP traffic in today's Internet reflect these changes or have largely remained the same. To answer this question, we collected and analyzed packet traces from a number of Internet backbone and access links, focusing on the "heavy-hitter" flows responsible for the majority of traffic. We then analyzed their within-flow packet dynamics, and observed the following features: (1) in one of our datasets, up to 15.8% of flows have an initial congestion window (ICW) size larger than the upper bound specified by RFC 3390. (2) Among flows that encounter retransmission rates of more than 10%, 5% of them exhibit irregular retransmission behavior where the sender does not slow down its sending rate during retransmissions. (3) TCP flow clocking (i.e., regular spacing between flights of packets) can be caused by both RTT and non-RTT factors such as the application or link layer, and 60% of flows studied show no pronounced flow clocking. To arrive at these findings, we developed novel techniques for analyzing unidirectional TCP flows, including a technique for inferring ICW size, a method for detecting irregular retransmissions, and a new approach for accurately extracting flow clocks.
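The intuition behind ICW inference from a unidirectional trace is that the initial flight is sent back-to-back, while the next flight must wait roughly one RTT for ACKs. A much-simplified sketch of that idea (not the paper's actual algorithm; the gap threshold and trace format are our assumptions):

```python
def infer_icw(packets, rtt, gap_frac=0.2):
    """Estimate the initial congestion window (in bytes) from a
    sender-side, data-direction trace: sum the payload bytes of the
    first flight, i.e. packets sent before the first inter-packet gap
    comparable to the RTT.  `packets` is a list of (timestamp_s,
    payload_len) tuples in send order."""
    if not packets:
        return 0
    icw = packets[0][1]
    for (t_prev, _), (t_cur, size) in zip(packets, packets[1:]):
        if t_cur - t_prev > gap_frac * rtt:
            break  # sender stalled waiting for ACKs: flight boundary
        icw += size
    return icw


# Synthetic trace: four back-to-back 1460-byte segments, then a pause
# of about one RTT before the next flight.
trace = [(0.000, 1460), (0.001, 1460), (0.002, 1460), (0.003, 1460),
         (0.103, 1460)]
infer_icw(trace, rtt=0.1)  # → 5840 (four segments)
```

A real implementation must additionally handle retransmissions, reordering, and application-limited senders, which is where the paper's techniques go well beyond this sketch.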
Computer Communications, 2005
This paper presents a new methodology to evaluate and graphically represent TCP performance in a web environment. The main novelty of this work is that it focuses on web environments involving a large number of connections, where the traffic model is extracted from real traces. In these cases, conventional tools are not efficient to handle the complexity of the analysis. The proposed framework includes: (i) a set of representative web browsing scenarios affected by different types of losses; (ii) a new analysis methodology to cope with the huge data volume related to the simulated connections and; (iii) a new graphical representation to allow an easy visualisation of the simulation results. To test the proposed methodology, an evaluation of the impact of two representative TCP configuration parameters for web traffic has been included.
2007 Winter Simulation Conference, 2007
The TCP models in ns-2 have been validated and are widely used in network research. They are however not aimed at producing results consistent with a TCP implementation, they are rather designed to be a general model for TCP congestion control. The Network Simulation Cradle makes real world TCP implementations available to ns-2: Linux, FreeBSD and OpenBSD can all be simulated as easily as using the original simplified models. These simulated TCP implementations can be validated by directly comparing packet traces from simulations to traces measured from a real network. We describe the Network Simulation Cradle, present packet trace comparison results showing the high degree of accuracy possible when simulating with real TCP implementations and briefly show how this is reflected in a simulation study of TCP throughput.
2008
There is a growing interest in the use of variants of the Transmission Control Protocol (TCP) in high-speed networks. ns-2 has implementations of many of these high-speed TCP variants, as does Linux. ns-2, through an extension, permits the incorporation of Linux TCP code within ns-2 simulations. As these TCP variants become more widely used, users are concerned about how these different variants of TCP might interact in a real network environment: how fair are these protocol variants to each other (in their use of the available capacity) when sharing the same network? Typically, the answer to this question might be sought through simulation and/or by use of an experimental testbed. We therefore compare, against TCP NewReno, the fairness of the congestion control algorithms of five high-speed TCP variants (BIC, Cubic, Scalable, High-Speed and Hamilton) on both ns-2 and an experimental testbed running Linux. In both cases, we use the same TCP code from Linux. We observe some differences between the behaviour of these TCP variants when comparing the testbed results to the results from ns-2, but also note that there is generally good agreement.
2004
The performance of popular Internet Web services is governed by a complex combination of server behavior, network characteristics and client workload, all interacting through the actions of the underlying transport control protocol (TCP). Consequently, even small changes to TCP or to the network infrastructure can have significant impact on end-to-end performance, yet at the same time it is challenging for service administrators to predict what that impact will be. In this paper we describe the implementation of a tool called Monkey that is designed to help address such questions. Monkey collects live TCP trace data near a server, distills key aspects of each connection (e.g., network delay, bottleneck bandwidth, server delays, etc.) and then is able to faithfully replay the client workload in a new setting. Using Monkey, one can easily evaluate the effects of different network implementations or protocol optimizations in a controlled fashion, without the limitations of synthetic workloads or the lack of reproducibility of live user traffic. Using realistic network traces from the Google search site, we show that Monkey is able to replay traces with a high degree of accuracy and can be used to predict the impact of changes to the TCP stack.
2011
The Internet is constantly changing and evolving. In this thesis the behaviour of various aspects of the TCP implementations underlying the Internet is measured. These include the Initial Congestion Window (ICW), the type of reaction to loss, Selective Acknowledgment (SACK) support, and Explicit Congestion Notification (ECN) support. We develop a new method to measure the congestion window reduction due to loss inferred from three duplicate ACKs. In a previous study, 94% of classified servers showed window halving, whereas we found that 50% of classified servers exhibited Binary Increase Congestion control (BIC) or Cubic-style behaviour, a departure from the Request For Comments (RFC) requirement to reduce the congestion window by at least 50%. ECN is predicted to improve Internet performance, but previous studies have revealed low support for it (0.5%) and a high failure rate of ECN connections due to middlebox interference (9%); in this thesis we show a steady increase over time of E...
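The distinction the thesis draws can be illustrated by comparing the window before and after a triple-duplicate-ACK loss event. A minimal sketch of such a classification (our own simplification with illustrative thresholds, not the thesis's measurement method):

```python
def classify_loss_response(cwnd_before, cwnd_after, tol=0.05):
    """Rough classification of a sender's reaction to three duplicate
    ACKs, from the congestion window before and after the loss event.
    RFC-conformant Reno/NewReno halves the window (factor 0.5), while
    BIC/Cubic-style senders apply a milder factor around 0.7-0.8."""
    ratio = cwnd_after / cwnd_before
    if ratio <= 0.5 + tol:
        return "halving (RFC-conformant)"
    if ratio <= 0.85:
        return "BIC/Cubic-style"
    return "no meaningful reduction"


classify_loss_response(20, 10)  # → "halving (RFC-conformant)"
classify_loss_response(20, 14)  # → "BIC/Cubic-style" (factor 0.7)
```

In practice the windows themselves must first be inferred from on-the-wire behaviour, which is the hard part the thesis's method addresses.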
Proceedings of the ACM on Measurement and Analysis of Computing Systems, 2019
In 2016, Google proposed and deployed a new TCP variant called BBR. BBR represents a major departure from traditional congestion-window-based congestion control. Instead of using loss as a congestion signal, BBR uses estimates of the bandwidth and round-trip delays to regulate its sending rate. The last major study on the distribution of TCP variants on the Internet was done in 2011, so it is timely to conduct a new census given the recent developments around BBR. To this end, we designed and implemented Gordon, a tool that allows us to measure the exact congestion window (cwnd) corresponding to each successive RTT in the TCP connection response of a congestion control algorithm. To compare a measured flow to the known variants, we created a localized bottleneck where we can introduce a variety of network changes like loss events, bandwidth change, and increased delay, and normalize all measurements by RTT. An offline classifier is used to identify the TCP variant based on the cwnd ...
2014
Arguably, understanding the behaviour of TCP is essential to understanding the behaviour of the whole Internet, since (1) the majority of traffic flows use it for their transport, (2) it has been around since the inception of the global net, showing remarkable scalability and robustness, (3) it has been subject to many modifications in order to absorb technological innovations, and (4) through its self-clocking disposition it has displayed flexibility and fairness. The heuristic nature of the TCP extensions and the inherent complexity of the Internet have provided the justification for using testbeds and simulations to describe and study the nature of TCP and the effect of the protocol on the Net. However, bearing in mind that the rudimentary TCP protocol is in fact a finite state machine (FSM), and acknowledging the need for analytical modelling to supplement the empirical work and thus provide the necessary predictability, has lately spurred intensive research into mathematica...
2013 IEEE International Conference on Communications (ICC), 2013
This paper considers the fundamental measurements which drive TCP flows: throughput, RTT and loss. It is clear that throughput is, in some sense, a function of both RTT and loss. In their seminal paper, Padhye et al. begin with a mathematical model of the TCP sliding-window evolution process and derive an equation showing that TCP throughput is (roughly) proportional to 1/(RTT·√p), where p is the probability of packet loss. Their equation is shown to be consistent with data gathered on several links. This paper takes the opposite approach and analyses a large number of packet traces from well-known sources in order to create a data-driven estimate of the functions which relate TCP throughput, loss and RTT. Regression analysis is used to fit models connecting the quantities. The fitted models show different behaviour from that expected in [1].
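The square-root relationship referenced above is often written in its simplest form as throughput ≈ MSS / (RTT · √(2p/3)). A small sketch of that formula, with illustrative numbers of our choosing (the full Padhye et al. model also accounts for timeouts and receiver window limits):

```python
from math import sqrt


def pftk_throughput_bps(mss_bytes, rtt_s, p):
    """Simplified square-root model of TCP throughput:
    rate ≈ MSS / (RTT * sqrt(2p/3)), returned here in bits per second.
    Valid only for small loss probabilities p > 0."""
    return mss_bytes * 8 / (rtt_s * sqrt(2 * p / 3))


# Example: 1460-byte MSS, 100 ms RTT, 1% loss gives roughly 1.4 Mbit/s.
pftk_throughput_bps(1460, 0.1, 0.01)
```

Note how the model predicts throughput falling with the square root of loss and linearly with RTT, which is exactly the shape the paper's regression analysis puts to the test against trace data.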
2009 5th International Conference on Testbeds and Research Infrastructures for the Development of Networks & Communities and Workshops, 2009
The ability to establish an objective comparison between high-performance TCP variants under diverse networking conditions and to obtain a quantitative assessment of their impact on the global network traffic is essential to a communitywide understanding of various design approaches. Small-scale experiments are insufficient for a comprehensive study of these TCP variants. We propose a TCP performance evaluation testbed, called SVEET, on which real implementations of the TCP variants can be accurately evaluated under diverse network configurations and workloads in large-scale network settings. This testbed combines real-time immersive simulation, emulation, machine and time virtualization techniques. We validate the testbed via extensive experiments and assess its capabilities through case studies involving real web services.
2020
Over the last decade, in an attempt to improve end-user performance, the community has proposed a multitude of changes to the networking stack’s configuration parameters. These changes range from new default values (e.g., initial congestion window size) to the development of new configuration options (e.g., congestion control protocols). While the networking community has performed extensive studies on the performance implications of configuration optimizations, these studies have been performed in isolation and, moreover, there are no holistic and general tools to infer, analyze, and understand the actual configuration choices employed by online content providers and content distribution networks. To this end, we present Inspector Gadget, a flexible and accurate framework for characterizing and fingerprinting a web server’s networking stack configuration parameters. Inspector Gadget leverages domain-specific heuristics to reverse engineer configuration parameters and options. To ...
Lecture Notes in Computer Science, 2007
A TCP flow is congestion responsive because it reduces its send window upon the appearance of congestion. An aggregate of non-persistent TCP flows, however, may not be congestion responsive, depending on whether the flow (or session) arrival process ...
2003
We propose a model of TCP performance that captures the behavior of a diverse set of network paths. The model uses more parameters than previous efforts, but we show that each feature of the model describes an effect that is important for at least some paths. We show that the model is sufficient to describe the datasets we collected with acceptable accuracy. Finally, we show that the model's parameters can be estimated using simple, application-level measurements.