2002, IEEE Transactions on Computers
The paper proposes local and global optimization schemes for efficient TCP buffer allocation in an HTTP server. The proposed local optimization scheme dynamically adjusts the TCP send-buffer size to the connection and server characteristics. The global optimization scheme divides a fixed amount of buffer space among all active TCP connections. These schemes are of increasing importance due to the wide variety of TCP connection characteristics. The schemes are compared to the static allocation policy employed by a typical HTTP server and are shown to achieve considerable improvement in server performance and better utilization of its resources. The schemes require only minor code changes, and only at the server.
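The global scheme's core idea — splitting a fixed buffer pool among active connections — can be sketched as follows. The function name, the proportional weighting, and the cap at per-connection demand are illustrative assumptions, not the paper's actual algorithm:

```python
def divide_buffer_pool(total_bytes, demands):
    """Split a fixed buffer pool among active connections.

    Each connection gets a share proportional to its estimated
    demand (e.g. its bandwidth-delay product), capped at that
    demand so no connection is given more buffer than it can use.
    """
    total_demand = sum(demands.values())
    if total_demand == 0:
        return {conn: 0 for conn in demands}
    shares = {}
    for conn, demand in demands.items():
        proportional = total_bytes * demand // total_demand
        shares[conn] = min(proportional, demand)
    return shares

# Example: a 1 MiB pool shared by three connections with unequal demands
shares = divide_buffer_pool(1 << 20, {"a": 300_000, "b": 600_000, "c": 900_000})
```

A connection with a larger bandwidth-delay product receives a proportionally larger share, while the sum of all shares never exceeds the pool.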
The original design of TCP failed to support reasonable performance over networks with large bandwidths and high round-trip times. Subsequent work on TCP has enabled the use of larger flow-control windows, yet the use of these options is still relatively rare, because manual tuning has been required. Other work has developed means for avoiding this manual tuning step, but those solutions lack generality and exhibit unfair characteristics. This paper introduces a new technique for TCP implementations to dynamically and automatically determine the best window size for optimum network performance. This technique results in greatly improved performance, a decrease in packet loss under bottleneck conditions, and greater control of buffer utilization by the end hosts. Previous research on buffer management can then be applied to these buffer allocation decisions.
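The "best window size" the technique converges toward is the classic bandwidth-delay product: the amount of data that must be in flight to keep the path full. A minimal illustration of that formula (the function name is ours):

```python
def bdp_window_bytes(bandwidth_bps, rtt_seconds):
    """Flow-control window needed to saturate a path: the
    bandwidth-delay product, converted from bits to bytes."""
    return int(bandwidth_bps / 8 * rtt_seconds)

# A 100 Mbit/s path with an 80 ms round-trip time needs a ~1 MB window
window = bdp_window_bytes(100e6, 0.080)
```

A statically configured window far below this value leaves the path idle most of each round trip; one far above it wastes buffer memory and can overflow bottleneck queues.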
2004
Existing TCP (Transmission Control Protocol) is known to be unsuitable for networks with a high bandwidth-delay product (BDP) because the buffer sizes at the TCP sender and receiver are fixed, whether small or large. Several schemes have therefore been proposed that adjust the buffer sizes automatically according to network conditions, in order to improve end-to-end TCP throughput. ATBT (Automatic TCP Buffer Tuning) sizes the TCP sender's buffer according to its current congestion window (CWND), but assumes that the TCP receiver's buffer is set to the maximum value the operating system defines. In DRS (Dynamic Right-Sizing), the TCP receiver estimates the next data arrival as twice the amount of TCP data received previously and reserves buffer space for it accordingly. However, reserving exactly twice the buffer size is unnecessary, because TCP segments may be lost. We propose an efficient TCP buffer tuning technique, TBT-PLR (TCP Buffer Tuning based on Packet Loss Ratio), which adopts the ATBT mechanism at the TCP sender and the TBT-PLR mechanism at the TCP receiver. To test actual TCP performance, we implemented TBT-PLR by modifying Linux kernel version 2.4.18 and evaluated it against TCP schemes with fixed buffer sizes. As a result, more balanced buffer usage among TCP connections was obtained.
Springer eBooks, 2005
GridFTP is a high-performance, secure, and reliable parallel data transfer protocol used for transferring widely distributed data. It currently allows users to configure the number of parallel streams and the socket buffer size, but tuning for their optimal combination is a time-consuming task. Socket handles and buffers are important system resources and must therefore be carefully managed. In this paper, an efficient resource management scheme is proposed that predicts optimal combinations using a simple regression equation. The equation is verified by comparing measured and predicted values, and it is applied to an actual experiment on the KOREN network. The results demonstrate that the equation predicts well, within an 8% error bound. This approach eliminates the time-consuming tuning procedure, and the results can be applied directly and widely to fast parameter selection in applications such as GridFTP.
2003
TCP Server is a system architecture aiming to offload network processing from the host(s) running an Internet server. The basic idea is to execute the TCP/IP processing on a dedicated processor, node, or device (the TCP server) using low-overhead, non-intrusive communication between it and the host(s) running the server application. In this paper, we propose, implement, and evaluate the TCP Server architecture to offload TCP/IP processing in two different scenarios: (1) using dedicated network processors on a symmetric multiprocessor (SMP) server, and (2) using dedicated nodes on a cluster-based server built around a memory-mapped communication interconnect such as VIA. Based on our experience and results, we draw several conclusions: (i) offloading TCP/IP processing is beneficial to overall system performance when the server is overloaded (performance gains of up to 30% were achieved in the scenarios we studied); (ii) TCP servers demand substantial computing resources for complete offloading: complete TCP/IP offloading to intelligent devices requires the device to be computationally powerful to outperform traditional architectures; (iii) the type of workload plays a significant role in the efficiency of TCP servers. Depending on the application workload, either the host processor or the TCP server can become the bottleneck, so a scheme to balance the load between the host and the TCP server would be beneficial to server performance.
International Journal of Advanced Computer Science and Applications, 2018
The development of applications such as online video streaming, collaborative writing, VoIP, and text and video messengers is increasing, and the number of such TCP-based applications grows with the increasing availability of the Internet. The TCP protocol works at the fourth layer of the Internet model and provides many services, including congestion control, reliable communication, and error detection and correction. Newer protocols such as the Stream Control Transmission Protocol (SCTP) offer more features than TCP, yet TCP remains the most widely used because of its wide deployment. TCP creates segments and transmits them to the receiver. To allow retransmission after errors, TCP keeps segments in the sender buffer; similarly, data is held in the receiver buffer before being delivered to the application layer. The choice of TCP sender and receiver buffer sizes can vary, and it matters because many applications run on smartphones equipped with a small amount of memory. In applications such as online video streaming, some errors are tolerable and retransmission is not necessary, so a small buffer is useful. For text transmission, however, TCP must completely reassemble the message before delivering it to the application layer; here a large buffer is useful and also minimizes TCP's buffer blocking problem. This paper provides a detailed study of the impact of TCP buffer size on smartphone applications. A simple scenario is implemented in the NS2 simulator for the experimentation.
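At the API level, the sender and receiver buffer sizes discussed above are the standard `SO_SNDBUF` and `SO_RCVBUF` socket options. A minimal sketch of setting and reading them back (the operating system may round or cap the requested values, so the read-back matters):

```python
import socket

# Request explicit sender/receiver buffer sizes on a TCP socket.
# The OS may adjust the requested values (Linux, for instance,
# doubles SO_RCVBUF to reserve room for bookkeeping), so always
# read the effective sizes back rather than trusting the request.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16 * 1024)

snd = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
rcv = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()
```

On a memory-constrained smartphone, a small `SO_RCVBUF` bounds per-connection memory at the cost of a smaller advertised window and hence lower throughput on high-BDP paths.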
Proceedings Third IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC 2000) (Cat. No. PR00607), 2000
There have been many debates about the feasibility of providing guaranteed Quality of Service (QoS) when network traffic travels beyond the enterprise domain and into the vast unknown of the Internet. Many mechanisms have been proposed to bring QoS to TCP/IP and the Internet (RSVP, DiffServ, 802.1p). However, until these techniques and the equipment to support them become ubiquitous, most enterprises will rely on local prioritization of the traffic to obtain the best performance for mission-critical and time-sensitive applications. This work explores prioritizing critical TCP/IP traffic using a multi-queue buffer management strategy that becomes biased against random low-priority flows and remains biased while congestion exists in the network. This biasing implies a degree of unfairness but proves to be more advantageous to the overall throughput of the network than strategies that attempt to be fair. Only two classes of service are considered; TCP connections are assigned to these classes and mapped to two underlying queues with round-robin scheduling and shared memory. In addition to improving the throughput, cell losses are minimized for the class of service (queue) with the higher priority.
10th IEEE Symposium on Computers and Communications (ISCC'05)
A packet buffer for a protocol processor is a large shared memory space that holds incoming data packets in a computer network. This paper investigates four packet buffer management algorithms for a protocol processor, including the Dynamic Algorithm with Different Thresholds (DADT), which is proposed to reduce the packet loss ratio efficiently. The proposed algorithm takes advantage of the different packet sizes of each application by allocating buffer space to each queue proportionally. According to our simulation results, the DADT algorithm reduces the packet loss ratio well compared to the other three algorithms.
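The proportional-allocation step can be sketched as follows. Only the per-queue proportional split is shown; the dynamic threshold adjustment that gives DADT its name is in the paper, and the function name and example sizes here are our own illustration:

```python
def proportional_thresholds(total_buffer, typical_packet_sizes):
    """Sketch of DADT's allocation idea: give each application
    queue a buffer threshold proportional to its typical packet
    size, so queues carrying large packets receive a
    proportionally larger share of the shared buffer."""
    total = sum(typical_packet_sizes.values())
    return {queue: total_buffer * size // total
            for queue, size in typical_packet_sizes.items()}

# Example: a 120 kB shared buffer split across three traffic types
thr = proportional_thresholds(120_000,
                              {"voip": 200, "web": 600, "bulk": 1400})
```

A queue whose packets are seven times larger gets roughly seven times the buffer, so a single large-packet burst does not immediately overflow its share.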
2001
With the widespread arrival of bandwidth-intensive applications such as bulk-data transfer, multi-media web streaming and computational grids for high-performance computing, networking performance over the wide-area network has become a critical component in the infrastructure. Tragically, operating systems are still tuned for yesterday's WAN speeds and network applications. As a result, a painstaking process of manually tuning system buffers must be undertaken to make TCP flow-control scale to meet the needs of today's bandwidth-rich networks. Consequently, we propose an operating system technique called dynamic right-sizing that eliminates the need for this manual process. Previous work has also attacked this problem, but with less than complete solutions. Our solution is more efficient, more transparent, and applies to a wider set of applications, including those that require strict flow-control semantics because of performance disparities between the sender and receiver.
1998
The rapid growth of the World Wide Web in recent years has caused a significant shift in the composition of Internet traffic. Although past work has studied the behavior of TCP dynamics in the context of bulk-transfer applications and some studies have begun to investigate the interactions of TCP and HTTP, few have used extensive real-world traffic traces to examine the problem. This interaction is interesting because of the way in which current Web browsers use TCP connections: multiple concurrent short connections from a single host.
Journal of Computer Networks and Communications
In the absence of losses, TCP constantly increases the amount of data sent per instant of time. This behavior leads to problems that affect its performance, especially when multiple devices share the same gateway. Several studies have been done to mitigate such problems, but many of them require TCP-side changes or meticulous configuration. Some approaches have shown promise, such as gateway techniques that change the receiver's advertised window in ACK segments based on the amount of memory in the gateway; in this work, we use the term "network-return" to refer to these techniques. In this paper, we present a new network-return technique called early window tailoring (EWT). EWT requires no modification of the TCP implementations at the endpoints, does not require that all routers in the path use the same congestion control mechanism, and needs to be deployed only at the gateway. With the use of the simulator ns-3 and following the recommendations of RFC 7928...
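The network-return idea can be sketched as a gateway clamping the window it forwards. This is not the paper's exact EWT rule (which is not given in the abstract); the per-flow fair share below is a hypothetical policy used only to illustrate the mechanism:

```python
def tailor_advertised_window(rwnd, gateway_free_bytes, active_flows):
    """Network-return sketch: a gateway rewrites the receiver's
    advertised window in ACK segments so no sender is told to
    send more than the gateway's per-flow share of free buffer
    memory. The fair-share policy here is a hypothetical example,
    not the EWT algorithm itself."""
    per_flow_share = gateway_free_bytes // max(active_flows, 1)
    return min(rwnd, per_flow_share)

# A receiver advertising 64 KB is clamped to the gateway's share
w = tailor_advertised_window(rwnd=65_535,
                             gateway_free_bytes=256_000,
                             active_flows=8)
```

Because the clamp rides on the standard advertised-window field, senders slow down through ordinary TCP flow control, with no endpoint changes.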
Journal of Grid Computing, 2003
It is often claimed that TCP is not a suitable transport protocol for data intensive Grid applications in high-performance networks. We argue that this is not necessarily the case. Without changing the TCP protocol, congestion control, or implementation, we show that an appropriately tuned TCP bulk transfer can saturate the available bandwidth of a network path. The proposed technique, called SOBAS, is based on automatic socket buffer sizing at the application layer. In non-congested paths, SOBAS limits the socket buffer size based on direct measurements of the received throughput and of the corresponding round-trip time. The key idea is that the send window should be limited, after the transfer has saturated the available bandwidth in the path, so that the transfer does not cause buffer overflows ('self-induced losses'). A difference with other socket buffer sizing schemes is that SOBAS does not require prior knowledge of the path characteristics, and it can be performed while the transfer is in progress. Experimental results in several high bandwidth-delay product paths show that SOBAS consistently provides a significant throughput increase (20% to 80%) compared to TCP transfers that use the maximum possible socket buffer size. We expect that SOBAS will be mostly useful for applications such as GridFTP in non-congested wide-area networks.
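The key idea stated in the abstract — cap the socket buffer near measured throughput times round-trip time once the path is saturated — reduces to a small formula. The function name and parameters below are our sketch, not the SOBAS implementation:

```python
def sobas_buffer_limit(recv_throughput_bps, rtt_seconds, max_buffer):
    """SOBAS core idea as stated in the abstract: once the
    transfer saturates the available bandwidth, cap the socket
    buffer near the measured throughput times the round-trip
    time, so the send window stops growing into self-induced
    losses at the bottleneck queue."""
    bdp = int(recv_throughput_bps / 8 * rtt_seconds)
    return min(bdp, max_buffer)

# A transfer measuring 400 Mbit/s over a 50 ms path is capped at
# its bandwidth-delay product rather than the 16 MiB OS maximum.
limit = sobas_buffer_limit(400e6, 0.050, max_buffer=16 << 20)
```

Because both inputs are measured during the transfer itself, no prior knowledge of path capacity or RTT is needed.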
2002
Rather than painful, manual, static, per-connection optimization of TCP buffer sizes simply to achieve acceptable performance for distributed applications [8, 10], many researchers have proposed techniques to perform this tuning automatically [4, 7, 9, 11, 12, 14]. This paper first discusses the relative merits of the various approaches in theory, and then provides substantial experimental data concerning two competing implementations: the buffer autotuning already present in Linux 2.4.x and "Dynamic Right-Sizing." This paper reveals heretofore unknown aspects of the problem and current solutions, provides insight into the proper approach for different circumstances, and points toward ways to further improve performance.
2001
This paper proposes a method for improving the performance of web servers servicing static HTTP requests. The idea is to give preference to those requests which are short, or have small remaining processing requirements, in accordance with the SRPT (Shortest Remaining Processing Time) scheduling policy.
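The SRPT policy itself is easy to state in code: always serve the request with the least remaining work. A minimal sketch using a priority queue (class and method names are our own):

```python
import heapq

class SRPTScheduler:
    """Shortest-Remaining-Processing-Time sketch: always serve
    the request with the fewest remaining bytes to send, so short
    static-file requests are not stuck behind long transfers."""

    def __init__(self):
        self._heap = []

    def add(self, request_id, remaining_bytes):
        # The heap orders by remaining bytes, smallest first.
        heapq.heappush(self._heap, (remaining_bytes, request_id))

    def next_request(self):
        remaining, request_id = heapq.heappop(self._heap)
        return request_id

sched = SRPTScheduler()
sched.add("big", 1_000_000)
sched.add("small", 2_048)
sched.add("medium", 50_000)
first = sched.next_request()  # "small" is served before the others
```

SRPT minimizes mean response time because every short request it promotes delays at most one long request, while that long request would have delayed many short ones.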
Proceedings of the 2012 ACM workshop on Capacity sharing - CSWS '12, 2012
Today, content replication methods are common ways of reducing the load on the network and on servers. Present content replication solutions suffer from various problems, including the need for pre-planning and management, and they are ineffective in the face of sudden traffic spikes. Despite these problems, content replication methods are more popular today than ever, simply because of an increasing need for load reduction. In this paper, we propose a shared buffering model that, unlike current proxy-based content replication methods, is native to the network and can be used to alleviate the stress of sudden traffic spikes on servers and the network. We outline the characteristics of a new transport protocol that uses the shared buffers to offload server work to the network or to reduce the pressure on overloaded links.
2002
The growth of the Internet and the deployment of new Web-based services have placed a high emphasis on system performance. Providers are interested in low-cost architectures and techniques to reduce the response times perceived by users during HTTP transactions. A widespread approach is based on the use of distributed Web server architectures built on clusters of servers, driven by dedicated machines that take charge of dispatching the HTTP requests. An increasingly popular mechanism to carry out dispatching is based on the analysis of request content. This is typically realized at the application level, since an implementation at the transport level may prove difficult due to the connection-oriented nature of TCP. This paper proposes a modification to TCP to support content-aware scheduling in the kernel of an OS, and analyzes the advantages of such a scheduling approach by implementing a name-based algorithm and comparing its performance with that of content-blind scheduling algorithms.
2000
We discuss the performance effects of using per-transaction TCP connections for HTTP access, and the proposed optimizations of avoiding per-transaction re-connection and TCP slow-start restart overheads. We analyzed the performance penalties of the interaction of HTTP and TCP. Our observations indicate that the proposed optimizations do not affect Web access for the vast majority of users. Most users see end-to-end
2012
Bufferbloat is a serious problem in modern networks with complex organization. Persistent saturation of the network, combined with excessive buffering at each hop, results in huge latency that can make interactive and real-time connections practically unusable. Most research in this area is devoted to Active Queue Management (AQM), which is used to keep the packet queue at the bottleneck of the network path at a reasonable size. This article takes a different view of bufferbloat: it proposes fighting the congestion that is the cause of buffer saturation. It argues that the loss-based congestion control algorithms most widely used in TCP today cannot correctly estimate the bandwidth in the presence of excessive buffering, and it suggests a delay-based approach to congestion avoidance. It presents ARTCP, which is delay-based and does not use burst sends limited by a congestion window, but instead transmits data at an adaptive rate estimated using pacing intervals.
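Rate-based sending of this kind boils down to spacing segments rather than bursting a window's worth at once. A minimal sketch of the pacing arithmetic (the function name is ours; ARTCP's actual rate estimation is in the paper):

```python
def pacing_interval(rate_bytes_per_s, segment_bytes):
    """Rate-based sending sketch in the spirit of ARTCP: instead
    of bursting a full congestion window, space segments so the
    average sending rate matches the estimated bandwidth. Returns
    the gap, in seconds, to leave between segment departures."""
    return segment_bytes / rate_bytes_per_s

# At 1 MB/s with 1460-byte segments, leave ~1.46 ms between sends
gap = pacing_interval(1_000_000, 1460)
```

Because segments arrive at the bottleneck evenly spaced, standing queues never build up, which is exactly the condition under which a delay-based estimator can observe the true path RTT.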