2002
This paper presents Daytona, a user-level TCP stack for Linux. A user-level TCP stack can be an invaluable tool for TCP performance research, network performance diagnosis, rapid prototyping and testing of new optimizations and enhancements to the TCP protocol, and as a tool for creating adaptive application-level overlays. We present the design and implementation of Daytona, and also describe several projects that are using Daytona in a rich variety of contexts, indicating its suitability as an open-source project.
2007 Winter Simulation Conference, 2007
The TCP models in ns-2 have been validated and are widely used in network research. They are, however, not aimed at producing results consistent with a real TCP implementation; rather, they are designed to be a general model for TCP congestion control. The Network Simulation Cradle makes real-world TCP implementations available to ns-2: Linux, FreeBSD and OpenBSD can all be simulated as easily as using the original simplified models. These simulated TCP implementations can be validated by directly comparing packet traces from simulations to traces measured from a real network. We describe the Network Simulation Cradle, present packet trace comparison results showing the high degree of accuracy possible when simulating with real TCP implementations, and briefly show how this is reflected in a simulation study of TCP throughput.
2009 5th International Conference on Testbeds and Research Infrastructures for the Development of Networks & Communities and Workshops, 2009
The ability to establish an objective comparison between high-performance TCP variants under diverse networking conditions and to obtain a quantitative assessment of their impact on the global network traffic is essential to a community-wide understanding of various design approaches. Small-scale experiments are insufficient for a comprehensive study of these TCP variants. We propose a TCP performance evaluation testbed, called SVEET, on which real implementations of the TCP variants can be accurately evaluated under diverse network configurations and workloads in large-scale network settings. This testbed combines real-time immersive simulation, emulation, machine and time virtualization techniques. We validate the testbed via extensive experiments and assess its capabilities through case studies involving real web services.
2000
The paper presents the experimental evaluation of the existing TCP implementations: Tahoe without Fast Retransmit, Reno, New-Reno. The short time analysis involved a software tool called TBIT (TCP Behavior Inference Tool), which was designed by AT&T Center for Internet Research. It generates short TCP traffic (about 25 segments), with the 13th and the 16th segments intentionally dropped. Depending on the
Ninth IEEE International Symposium on Multimedia (ISM 2007), 2007
Real-time delivery of time-dependent data over the Internet is challenging. UDP has often been used to transport data in a timely manner, but its lack of congestion control is often criticized. This criticism is a reason that the vast majority of applications today use TCP. The downside of this is that TCP has problems with the timely delivery of data. A transport protocol that adds congestion control to an otherwise UDP-like behaviour is DCCP. For this protocol, late data choice (LDC) [8] has been proposed to allow adaptive applications control over data packets up to the actual transmission time. We find, however, that application developers appreciate other TCP features as well, such as its reliability. We have therefore implemented and tested the LDC ideas for TCP. Our extension allows the application to modify or drop packets that have been handed to TCP until they are actually transmitted to the network. This is achieved with a shared packet ring and indexes to hold the current status. Our experiments show that we can send more useful data with LDC than without in a streaming scenario. We can therefore claim that we achieve a better utilization of the throughput, giving us a higher goodput with LDC than without.
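The shared packet ring the abstract mentions can be illustrated with a small toy sketch in Python. All names and the API here are hypothetical (the paper's actual mechanism lives inside the kernel's TCP send path); the sketch only shows the core idea: the application keeps a handle to each queued packet and may replace or drop it until it is handed to the network.

```python
class LateDataChoiceRing:
    """Toy model of a late-data-choice send queue (hypothetical API,
    not the paper's implementation)."""

    def __init__(self):
        self._pending = {}      # packet index -> payload, in enqueue order
        self._next_index = 0

    def enqueue(self, payload):
        """Queue a packet; return a handle the application keeps."""
        idx = self._next_index
        self._pending[idx] = payload
        self._next_index += 1
        return idx

    def replace(self, idx, payload):
        """Swap in fresher data if the packet has not been sent yet."""
        if idx in self._pending:
            self._pending[idx] = payload
            return True
        return False            # already transmitted: too late to change

    def drop(self, idx):
        """Discard a stale packet that is still pending."""
        return self._pending.pop(idx, None) is not None

    def transmit_next(self):
        """Hand the oldest pending packet to the 'network'."""
        if not self._pending:
            return None
        idx = min(self._pending)
        return self._pending.pop(idx)
```

In a streaming scenario this is why goodput improves: a video frame that has become stale while waiting in the queue can be replaced by a newer one instead of wasting transmission capacity.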
2009
The main objective of this thesis is to provide a framework for evaluating the correctness of TCP implementations. We use this framework in order to assess the completeness and correctness of four widely available implementations: DEC OSF 3.2, FreeBSD 2.2.2, Windows 95 and Windows NT 4.0. Through our evaluation, we will attempt to determine which TCP mechanisms each implementation provides and whether these mechanisms are implemented correctly according to TCP standards. We also show that our approach can be extended to evaluate the communication properties of applications that use TCP as their means of communication. Our approach is to analyze real TCP traffic passively; that is, we obtain packet traces from the network and, later, we evaluate this traffic. In order to facilitate the evaluation process, we represent TCP traffic in graphical format. The development of a network traffic tracing tool was necessary to conduct the evaluation. The tool uses already existing packet tracing...
International Journal of Communication Systems, 2007
TCP is the most widely used transport protocol on the Internet today. Over the years, especially recently, due to requirements of high bandwidth transmission, various approaches have been proposed to improve TCP performance. The Linux 2.6 kernel is now preemptible. It can be interrupted mid-task, making the system more responsive and interactive. However, we have noticed that Linux kernel preemption can interact badly with the performance of the networking subsystem. In this paper we investigate the performance bottleneck in Linux TCP. We systematically describe the trip of a TCP packet from its ingress into a Linux network end system to its final delivery to the application; we study the performance bottleneck in Linux TCP through mathematical modeling and practical experiments; finally we propose and test one possible solution to resolve this performance bottleneck in Linux TCP.
2008
There is a growing interest in the use of variants of the Transmission Control Protocol (TCP) in high-speed networks. ns-2 has implementations of many of these high-speed TCP variants, as does Linux. ns-2, through an extension, permits the incorporation of Linux TCP code within ns-2 simulations. As these TCP variants become more widely used, users are concerned about how they might interact in a real network environment: how fair are these protocol variants to each other (in their use of the available capacity) when sharing the same network? Typically, the answer to this question might be sought through simulation and/or by use of an experimental testbed. We therefore compare the fairness of the congestion control algorithms of five high-speed TCP variants (BIC, Cubic, Scalable, High-Speed and Hamilton) against TCP NewReno, on both ns-2 and an experimental testbed running Linux. In both cases, we use the same TCP code from Linux. We observe some differences between the behaviour of these TCP variants when comparing the testbed results to the results from ns-2, but also note that there is generally good agreement.
work well over wired networks. In a wired network, packet loss occurs mainly due to network congestion. On a wireless link, by contrast, packet losses are caused mainly by bit errors resulting from noise, interference, and various kinds of fading. TCP performance in these environments is affected by three path characteristics not normally present in wired environments: high bandwidth-delay product, packet losses due to corruption, and bandwidth asymmetry. TCP over wireless has no way to tell whether a packet loss is caused by congestion or by a bit error: it assumes the loss is caused by congestion, turns on its congestion control algorithms to slow down the amount of data it transmits, and applies its retransmission policy. Invoking congestion control for bit errors on a wireless channel reduces TCP throughput drastically. We propose an empirical architecture to recover these bit errors at the data link layer dynamically, before the frame enters the buffer, which reduces the number of ret...
The TCP protocol is used by the majority of the network applications on the Internet. TCP performance is strongly influenced by its congestion control algorithms that limit the amount of transmitted traffic based on the estimated network capacity and utilization. Because the freely available Linux operating system has gained popularity especially in the network servers, its TCP implementation affects many of the network interactions carried out today. We describe the fundamentals of the Linux TCP design, concentrating on the congestion control algorithms. The Linux TCP implementation supports SACK, TCP timestamps, Explicit Congestion Notification, and techniques to undo congestion window adjustments after incorrect congestion notifications.
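The "undo" technique mentioned at the end of this abstract can be sketched roughly as follows. This is a simplified model with assumed names, not the actual Linux code: the sender saves its pre-loss state when it reacts to a loss signal, and restores it if the loss later proves spurious (e.g. revealed via DSACK or a timestamp check).

```python
class CwndUndoSketch:
    """Simplified model of congestion-window undo (assumed names,
    not the Linux implementation)."""

    def __init__(self, cwnd=10, ssthresh=64):
        self.cwnd = cwnd            # congestion window, in packets
        self.ssthresh = ssthresh    # slow-start threshold, in packets
        self._prior = None          # saved (cwnd, ssthresh) before a loss

    def on_loss_signal(self):
        """React to an apparent loss: halve the window, remember state."""
        self._prior = (self.cwnd, self.ssthresh)
        self.ssthresh = max(self.cwnd // 2, 2)
        self.cwnd = self.ssthresh

    def on_spurious_detection(self):
        """The loss signal was wrong: undo the window reduction."""
        if self._prior is not None:
            self.cwnd, self.ssthresh = self._prior
            self._prior = None
```

The point of the undo is to avoid paying the throughput cost of a multiplicative decrease when the congestion notification turns out to have been incorrect.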
2002
TCP Server is a system architecture aiming to offload network processing from the host(s) running an Internet server. The basic idea is to execute the TCP/IP processing on a dedicated processor, node, or device (the TCP server) using low-overhead, non-intrusive communication between it and the host(s) running the server application. In this paper, we propose, implement, and evaluate
2007
In recent years, several new TCP congestion control algorithms have been proposed to improve TCP performance over very fast, long-distance networks. High bandwidth-delay products require more aggressive window adaptation rules, while maintaining the ability to control router buffer congestion. We define a relatively simple experimental scenario to compare most current high-speed TCP proposals under several metrics: efficiency, internal fairness, friendliness to Reno, induced network stress, and robustness to random losses. Based on the gained insight, we define Yet Another High-speed TCP as a heuristic attempt to strike a balance among opposing requirements.
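The tension between standard AIMD and more aggressive window adaptation can be illustrated with a toy per-RTT growth model. The `toy_highspeed_increase` rule below is purely illustrative (it is not any of the surveyed proposals); it only shows why an increase that scales with the window fills a large pipe in far fewer round trips than Reno's fixed +1 packet per RTT.

```python
def rtts_to_reach(target_cwnd, increase):
    """Count congestion-avoidance RTTs needed to grow the congestion
    window (in packets) from 1 to target_cwnd under a per-RTT rule."""
    cwnd, rtts = 1.0, 0
    while cwnd < target_cwnd:
        cwnd += increase(cwnd)
        rtts += 1
    return rtts

def reno_increase(cwnd):
    return 1.0                       # standard AIMD: +1 packet per RTT

def toy_highspeed_increase(cwnd):
    return max(1.0, 0.01 * cwnd)     # illustrative: ~1% growth once large
```

On a path whose bandwidth-delay product is 10000 packets, the Reno rule needs nearly 10000 RTTs to fill the pipe after a loss, while a window-scaled rule needs orders of magnitude fewer; the cost is that the aggressive rule competes less fairly with Reno flows, which is exactly the trade-off the paper's metrics probe.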
Computer Science Department, Rutgers University, 2002
TCP Server is a system architecture aiming to offload network processing from the host(s) running an Internet server. The basic idea is to execute the TCP/IP processing on a dedicated processor, node, or device (the TCP server) using low-overhead, non-intrusive communication between it and the host(s) running the server application. In this paper, we propose, implement, and evaluate three TCP Server architectures: (1) a dedicated network processor on a symmetric multiprocessor (SMP) server, (2) a dedicated node on a cluster- ...
2004
The performance of popular Internet Web services is governed by a complex combination of server behavior, network characteristics and client workload, all interacting through the actions of the underlying transport control protocol (TCP). Consequently, even small changes to TCP or to the network infrastructure can have significant impact on end-to-end performance, yet at the same time it is challenging for service administrators to predict what that impact will be. In this paper we describe the implementation of a tool called Monkey that is designed to help address such questions. Monkey collects live TCP trace data near a server, distills key aspects of each connection (e.g., network delay, bottleneck bandwidth, server delays, etc.) and then is able to faithfully replay the client workload in a new setting. Using Monkey, one can easily evaluate the effects of different network implementations or protocol optimizations in a controlled fashion, without the limitations of synthetic workloads or the lack of reproducibility of live user traffic. Using realistic network traces from the Google search site, we show that Monkey is able to replay traces with a high degree of accuracy and can be used to predict the impact of changes to the TCP stack.
1994
Vegas is a new implementation of TCP that achieves between 40 and 70% better throughput, with one-fifth to one-half the losses, as compared to the implementation of TCP in the Reno distribution of BSD Unix. This paper motivates and describes the three key techniques employed by Vegas, and presents the results of a comprehensive experimental performance study, using both simulations and measurements on the Internet, of the Vegas and Reno implementations of TCP, carried out in the x-kernel framework [3]. Our implementation of Reno was derived by retrofitting the BSD implementation into the x-kernel. Our implementation of Vegas was derived by modifying Reno.
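The core Vegas technique is commonly described as comparing expected and actual throughput to estimate how many extra packets the connection is queueing in the network. A textbook-style sketch of one congestion-avoidance step (constants and units are illustrative, not the paper's exact parameters):

```python
def vegas_adjust(cwnd, base_rtt, rtt, alpha=1.0, beta=3.0):
    """One Vegas-style congestion-avoidance step (textbook form).
    cwnd in packets; base_rtt is the minimum observed RTT in seconds."""
    expected = cwnd / base_rtt       # throughput if nothing were queued
    actual = cwnd / rtt              # throughput actually measured
    diff = (expected - actual) * base_rtt   # est. packets queued in network
    if diff < alpha:
        return cwnd + 1              # under-using the path: grow
    if diff > beta:
        return cwnd - 1              # queue building up: back off early
    return cwnd                      # in the sweet spot: hold steady
```

Because the adjustment is driven by RTT inflation rather than by packet loss, Vegas can back off before the bottleneck queue overflows, which is the intuition behind its lower loss rates compared to Reno.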
Networks have become multipath: mobile devices have multiple radio interfaces, datacenters have redundant paths, and multihoming is the norm for big server farms. Meanwhile, TCP is still only single-path. Is it possible to extend TCP to enable it to support multiple paths for current applications on today's Internet? The answer is positive. We carefully review the constraints, partly due to various types of middleboxes, that influenced the design of Multipath TCP and show how we handled them to achieve its deployability goals. We report our experience in implementing Multipath TCP in the Linux kernel and we evaluate its performance. Our measurements focus on the algorithms needed to efficiently use paths with different characteristics, notably send and receive buffer tuning and segment reordering. We also compare the performance of our implementation with regular TCP on web servers. Finally, we discuss the lessons learned from designing MPTCP.
This dissertation proposes and evaluates a new approach for generating realistic traffic in networking experiments. The main problem solved by our approach is generating closed-loop traffic consistent with the behavior of the entire set of applications in modern traffic mixes. Unlike earlier approaches, which described individual applications in terms of the specific semantics of each application, we describe the source behavior driving each connection in a generic manner using the a-b-t model. This model provides an intuitive but detailed way of describing source behavior in terms of connection vectors that capture the sizes and ordering of application data units, the quiet times between them, and whether data exchange is sequential or concurrent. This is consistent with the view of traffic from TCP, which does not concern itself with application semantics. The a-b-t model also satisfies a crucial property: given a packet header trace collected from an arbitrary Internet link, we c...
Proceedings of the 9th ACM SIGCOMM conference on Internet measurement, 2009
Since the last in-depth studies of measured TCP traffic some 6-8 years ago, the Internet has experienced significant changes, including the rapid deployment of backbone links with 1-2 orders of magnitude more capacity, the emergence of bandwidth-intensive streaming applications, and the massive penetration of new TCP variants. These and other changes beg the question whether the characteristics of measured TCP traffic in today's Internet reflect these changes or have largely remained the same. To answer this question, we collected and analyzed packet traces from a number of Internet backbone and access links, focused on the "heavy-hitter" flows responsible for the majority of traffic. Next we analyzed their within-flow packet dynamics, and observed the following features: (1) in one of our datasets, up to 15.8% of flows have an initial congestion window (ICW) size larger than the upper bound specified by RFC 3390. (2) Among flows that encounter retransmission rates of more than 10%, 5% of them exhibit irregular retransmission behavior where the sender does not slow down its sending rate during retransmissions. (3) TCP flow clocking (i.e., regular spacing between flights of packets) can be caused by both RTT and non-RTT factors such as application or link layer, and 60% of flows studied show no pronounced flow clocking. To arrive at these findings, we developed novel techniques for analyzing unidirectional TCP flows, including a technique for inferring ICW size, a method for detecting irregular retransmissions, and a new approach for accurately extracting flow clocks.
2017
TCP adaptation and retransmission strategies provide robust advantages by abstracting the development of applications on the Internet from the development of lower layers. However, the abstraction hides useful low-level information from researchers and administrators who could otherwise use the insights from the performance of TCP and lower layers to diagnose problems and improve TCP performance. Though common tools exist for manual analysis of TCP performance, some of these tools are outdated or arduous to easily use. The primary contribution of this thesis is Vessel, a tool to supplement existing tools with per-connection instrumentation, improving the ability to perform network analysis tests while providing sufficiently detailed information to identify differences with different tests. Vessel leverages the Extended Berkeley Packet Filter and Linux network namespaces. We demonstrate the utility of the tool by analyzing TCP flows associated with competing web-based speed measureme...
RFC, 2011
This memo describes a methodology for measuring sustained TCP throughput performance in an end-to-end managed network environment. This memo is intended to provide a practical approach to help users validate the TCP layer performance of a managed network, which should provide a better indication of end-user application level experience. In the methodology, various TCP and network parameters are identified that should be tested as part of the network verification at the TCP layer.
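Two quantities central to any such throughput methodology are the bandwidth-delay product of the path and the window-limited throughput bound. A back-of-the-envelope sketch (helper names are ours, not terminology from the memo):

```python
def bdp_bytes(bottleneck_bps, rtt_s):
    """Bandwidth-delay product: how many bytes must be in flight
    to keep the bottleneck link fully utilized."""
    return bottleneck_bps * rtt_s / 8.0

def window_limited_throughput_bps(window_bytes, rtt_s):
    """Upper bound on TCP throughput when limited by the window:
    at most one full window can be delivered per round trip."""
    return window_bytes * 8.0 / rtt_s
```

For example, a 100 Mbit/s path with a 50 ms RTT has a BDP of 625 KB, so a sender stuck at the classic 64 KB window (no window scaling) can sustain only about 10.5 Mbit/s regardless of link capacity, which is the kind of gap between expected and measured TCP-layer throughput the methodology is designed to expose.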