The original design of TCP failed to support reasonable performance over networks with large bandwidths and high round-trip times. Subsequent work on TCP has enabled the use of larger flow-control windows, yet the use of these options is still relatively rare, because manual tuning has been required. Other work has developed means for avoiding this manual tuning step, but those solutions lack generality and exhibit unfair characteristics. This paper introduces a new technique for TCP implementations to dynamically and automatically determine the best window size for optimum network performance. This technique results in greatly improved performance, a decrease in packet loss under bottleneck conditions, and greater control of buffer utilization by the end hosts. Previous research on buffer management can then be applied to these buffer allocation decisions.
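The core of any such automatic tuning is sizing the flow-control window to the path's bandwidth-delay product. A minimal sketch of that calculation (the function name and the MSS default are illustrative, not from the paper):

```python
def autotune_window(bandwidth_bps, rtt_s, mss=1460):
    """Return a flow-control window (in bytes) sized to the
    bandwidth-delay product, rounded up to whole segments."""
    bdp = bandwidth_bps / 8 * rtt_s          # bytes in flight needed to fill the pipe
    segments = -(-bdp // mss)                # ceiling division
    return int(segments * mss)

# A 100 Mb/s path with an 80 ms RTT needs roughly a 1 MB window,
# far beyond TCP's original 64 KB limit without window scaling.
print(autotune_window(100e6, 0.080))
```

The point of the paper's technique is to arrive at this value dynamically, per connection, without an operator measuring bandwidth and RTT by hand.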
IEEE Transactions on Computers, 2002
The paper proposes local and global optimization schemes for efficient TCP buffer allocation in an HTTP server. The proposed local optimization scheme dynamically adjusts the TCP send-buffer size to the connection and server characteristics. The global optimization scheme divides a fixed amount of buffer space among all active TCP connections. These schemes are of increasing importance due to the wide variation in TCP connection characteristics. The schemes are compared to the static allocation policy employed by a typical HTTP server, and shown to achieve considerable improvement in server performance and better utilization of its resources. The schemes require only minor code changes, and only at the server.
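A global scheme of this kind must split one pool across connections with very different needs. A sketch of one plausible policy, proportional sharing with a per-connection floor (the function and the weighting by estimated throughput are assumptions for illustration, not the paper's exact algorithm):

```python
def divide_buffer(total_bytes, conn_rates, floor=4096):
    """Split a global send-buffer pool among active connections in
    proportion to each connection's estimated throughput, while
    guaranteeing every connection a minimum floor allocation."""
    n = len(conn_rates)
    spare = max(total_bytes - floor * n, 0)
    total_rate = sum(conn_rates) or 1        # avoid division by zero
    return [floor + int(spare * r / total_rate) for r in conn_rates]

# Four equal-rate connections sharing 100 KB each get 25 KB:
print(divide_buffer(100_000, [1, 1, 1, 1]))
```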
2004
The existing TCP (Transmission Control Protocol) is known to be unsuitable for networks with a high BDP (Bandwidth-Delay Product) because of the fixed buffer sizes at the TCP sender and receiver. Several schemes have therefore been proposed to adjust the buffer sizes automatically according to network conditions and so improve end-to-end TCP throughput. ATBT (Automatic TCP Buffer Tuning) sizes the TCP sender's buffer according to its current congestion window (CWND), but assumes that the TCP receiver's buffer is set to the maximum value the operating system defines. In DRS (Dynamic Right-Sizing), the TCP receiver estimates the next round's arrival as twice the amount of TCP data received previously, and reserves buffer space for the next arrival accordingly. However, reserving exactly twice the buffer size is unnecessary, because TCP segments may be lost. We propose an efficient TCP buffer tuning technique, called TBT-PLR (TCP Buffer Tuning based on Packet Loss Ratio), which adopts the ATBT mechanism at the TCP sender and the TBT-PLR mechanism at the TCP receiver. To test actual TCP performance, we implemented TBT-PLR by modifying Linux kernel version 2.4.18 and evaluated it against TCP schemes with fixed buffer sizes. As a result, more balanced buffer usage among TCP connections was obtained.
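The contrast between the two receiver-side policies can be made concrete. Below is a sketch: the DRS rule is the doubling described above, while the loss-ratio scaling in `tbt_plr_reserve` is an illustrative formula of our own, not the exact rule from the paper:

```python
def drs_reserve(prev_bytes):
    """DRS rule: reserve twice what arrived in the previous interval,
    so a sender doubling its window is never receiver-limited."""
    return 2 * prev_bytes

def tbt_plr_reserve(prev_bytes, loss_ratio):
    """TBT-PLR-style idea (illustrative scaling): shrink the 2x DRS
    reservation as the observed packet-loss ratio grows, since a lossy
    path rarely delivers a full doubling of the arrival rate."""
    return int(2 * prev_bytes * (1.0 - loss_ratio))

# With 10% loss, the receiver reserves 1800 bytes instead of 2000
# for a connection that delivered 1000 bytes last interval.
print(drs_reserve(1000), tbt_plr_reserve(1000, 0.1))
```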
2001
With the widespread arrival of bandwidth-intensive applications such as bulk-data transfer, multi-media web streaming and computational grids for high-performance computing, networking performance over the wide-area network has become a critical component in the infrastructure. Tragically, operating systems are still tuned for yesterday's WAN speeds and network applications. As a result, a painstaking process of manually tuning system buffers must be undertaken to make TCP flow-control scale to meet the needs of today's bandwidth-rich networks. Consequently, we propose an operating system technique called dynamic right-sizing that eliminates the need for this manual process. Previous work has also attacked this problem, but with less than complete solutions. Our solution is more efficient, more transparent, and applies to a wider set of applications, including those that require strict flow-control semantics because of performance disparities between the sender and receiver.
Journal of Computer Networks and Communications
In the absence of losses, TCP constantly increases the amount of data sent per unit of time. This behavior leads to problems that affect its performance, especially when multiple devices share the same gateway. Several studies have attempted to mitigate such problems, but many of them require changes at the TCP endpoints or meticulous configuration. Some approaches have shown promise, such as gateway techniques that rewrite the receiver's advertised window in ACK segments based on the amount of memory available at the gateway; in this work, we use the term "network-return" to refer to these techniques. In this paper, we present a new network-return technique called early window tailoring (EWT). EWT requires no modification of the TCP implementations at either endpoint, does not require that all routers on the path use the same congestion-control mechanism, and deploying it at the gateway alone is sufficient. With the use of the simulator ns-3 and following the recommendations of RFC 7928...
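The common core of these network-return techniques is simple: the gateway caps the window the receiver advertises before forwarding the ACK. A minimal sketch, assuming a fair-share cap derived from the gateway's free buffer (the function name and the sharing policy are illustrative, not EWT's exact rule):

```python
def tailor_ack_window(advertised_rwnd, free_gateway_buf, n_flows):
    """Network-return sketch: cap the receiver's advertised window in a
    forwarded ACK at this flow's fair share of the gateway's free
    buffer, so senders cannot collectively overrun the bottleneck."""
    fair_share = free_gateway_buf // max(n_flows, 1)
    return min(advertised_rwnd, fair_share)

# Four flows sharing 100 KB of free buffer: a 64 KB advertisement
# is clamped to 25 KB, while a smaller one passes through unchanged.
print(tailor_ack_window(65535, 100_000, 4))
print(tailor_ack_window(10_000, 100_000, 4))
```

Because the change rides in the ACK's existing window field, the sender obeys it through ordinary flow control, which is why no endpoint modification is needed.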
2007
Many techniques have been proposed in the last few years to address performance degradations in end-to-end congestion control. Although these techniques require parameter tuning to operate in different congestion scenarios, they miss the challenging target of both minimizing network delay and keeping goodput close to the network capacity. In this paper we propose a new mechanism, called Active Window Management (AWM), which addresses these targets by stabilizing the queue length in the network gateways. AWM acts on the Advertised Window parameter in the TCP segment carrying the acknowledgment, but it does not modify the TCP protocol itself. The proposed technique is implemented in the network access gateways, that is, the gateways through which both the incoming and outgoing packets of a given TCP connection are forced to pass, whatever routing strategy the network uses. We show that when the access gateways implementing AWM are the bottleneck in the network, TCP performance is very close to that of a pseudo constant-bit-rate protocol with no loss, while network utilization is close to one.
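Stabilizing the queue length suggests a simple feedback controller: shrink the advertised window as the queue grows past a target. The proportional rule below is a sketch of that control idea under assumed parameters, not the AWM algorithm as published:

```python
def awm_window(base_window, queue_len, target_queue, gain=0.5):
    """Queue-stabilizing control (sketch): reduce the advertised window
    in proportion to how far the gateway queue exceeds its target, so
    the queue settles near the target instead of filling or emptying."""
    error = queue_len - target_queue
    window = base_window - int(gain * error)
    return max(window, 1)   # never advertise zero in this sketch

# Queue 20 packets over target: the window shrinks by 10 segments.
print(awm_window(100, 120, 100))
# Queue on target: the window is left unchanged.
print(awm_window(100, 100, 100))
```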
Proceedings Third IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC 2000) (Cat. No. PR00607), 2000
There have been many debates about the feasibility of providing guaranteed Quality of Service (QoS) when network traffic travels beyond the enterprise domain and into the vast unknown of the Internet. Many mechanisms have been proposed to bring QoS to TCP/IP and the Internet (RSVP, DiffServ, 802.1p). However, until these techniques and the equipment to support them become ubiquitous, most enterprises will rely on local prioritization of the traffic to obtain the best performance for mission critical and time sensitive applications. This work explores prioritizing critical TCP/IP traffic using a multi-queue buffer management strategy that becomes biased against random low priority flows and remains biased while congestion exists in the network. This biasing implies a degree of unfairness but proves to be more advantageous to the overall throughput of the network than strategies that attempt to be fair. Only two classes of services are considered where TCP connections are assigned to these classes and mapped to two underlying queues with round robin scheduling and shared memory. In addition to improving the throughput, cell losses are minimized for the class of service (queue) with the higher priority.
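The biased two-queue buffer can be sketched in a few lines. The eviction rule here, pushing out a queued low-priority packet to admit a high-priority arrival under congestion, is one plausible reading of "biased against random low priority flows", labeled as an assumption:

```python
from collections import deque

class TwoClassBuffer:
    """Sketch of a shared-memory buffer with two priority queues that
    becomes biased against low-priority traffic under congestion."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queues = {0: deque(), 1: deque()}   # 0 = high, 1 = low priority

    def enqueue(self, pkt, prio):
        used = len(self.queues[0]) + len(self.queues[1])
        if used < self.capacity:
            self.queues[prio].append(pkt)
            return True
        # Congested: admit a high-priority arrival by evicting a queued
        # low-priority packet; drop low-priority arrivals outright.
        if prio == 0 and self.queues[1]:
            self.queues[1].popleft()
            self.queues[0].append(pkt)
            return True
        return False
```

A round-robin scheduler would then drain the two queues; the bias lives entirely in the admission rule above.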
Recently, TCP incast problem in data center networks has attracted a wide range of industrial and academic attention. Lots of attempts have been made to address this problem through experiments and simulations. This paper analyzes the TCP incast problem in data centers by focusing on the relationships between the TCP throughput and the congestion control window size of TCP. The root cause of the TCP incast problem is explored and the essence of the current methods to mitigate the TCP incast is well explained. The rationality of our analysis is verified by simulations. The analysis as well as the simulation results provides significant implications to the TCP incast problem. Based on these implications, an effective approach named IDTCP (Incast Decrease TCP) is proposed to mitigate the TCP incast problem. Analysis and simulation results verify that our approach effectively mitigates the TCP incast problem and noticeably improves the TCP throughput.
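The window/buffer relationship at the heart of incast can be illustrated numerically. The sketch below is a back-of-the-envelope bound, not IDTCP itself: with n synchronized senders sharing one shallow switch buffer, each sender's window must stay below its share of that buffer or the burst overflows it:

```python
def incast_safe_cwnd(switch_buf_bytes, n_senders, mss=1460):
    """Incast intuition (sketch): the largest congestion window, in
    segments, that n synchronized senders can each use without their
    combined burst overflowing a shared switch buffer."""
    per_sender = switch_buf_bytes // (n_senders * mss)
    return max(per_sender, 1)

# A typical shallow 128 KB buffer shared by 30 senders leaves room
# for only about 2 segments of cwnd each -- the incast regime.
print(incast_safe_cwnd(128 * 1024, 30))
```

This is why incast worsens as fan-in grows: the safe per-sender window shrinks below what TCP's congestion control will naturally probe, and synchronized losses follow.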
2002
Rather than painful, manual, static, per-connection optimization of TCP buffer sizes simply to achieve acceptable performance for distributed applications [8, 10], many researchers have proposed techniques to perform this tuning automatically [4, 7, 9, 11, 12, 14]. This paper first discusses the relative merits of the various approaches in theory, and then provides substantial experimental data concerning two competing implementations: the buffer autotuning already present in Linux 2.4.x and "Dynamic Right-Sizing." This paper reveals heretofore unknown aspects of the problem and current solutions, provides insight into the proper approach for different circumstances, and points toward ways to further improve performance.
Standard TCP congestion-control mechanisms work well over wired networks, where packet loss occurs mainly due to network congestion. In a wireless link, by contrast, packet losses are caused mainly by bit errors resulting from noise, interference, and various kinds of fading. TCP performance in these environments is affected by three path characteristics not normally present in wired environments: high bandwidth-delay product, packet losses due to corruption, and bandwidth asymmetry. TCP cannot tell whether a packet loss over a wireless link is caused by congestion or by bit errors; it assumes congestion, invokes its congestion-control algorithms to slow the rate at which it transmits, and applies its retransmission policy. Invoking congestion control for bit errors on a wireless channel reduces TCP throughput drastically. We propose an empirical architecture to recover these bit errors at the Data Link Layer dynamically, before a frame enters the buffer, which reduces the number of ret...