2002, International Conference on Internet Computing
Network performance degradation due to congestion is a major problem in the Internet. This degradation is mainly due to the uncoordinated interaction between TCP's congestion control mechanism and that of the underlying network. Another challenging problem is that TCP cannot achieve fair bandwidth allocation among competing TCP flows. In this paper, we propose a Fair Intelligent Congestion Control Resource Discovery (FICCRD) mechanism that improves end-to-end TCP performance by controlling congestion and allocating a fair share of bandwidth among competing TCP flows. The key ideas of FICCRD are to integrate available network resources in estimating connections' fair share of network resources; to create feedback control loops between edge routers; to introduce a protocol whereby a special Resource Discovery (RD) packet is employed to collect and convey en-route router state information; and to employ intelligent algorithms to match a connection's TCP sending rate to the rate the underlying network can support. Simulation results show that the mechanism significantly improves throughput, fairness, and packet loss rate for TCP connections. More importantly, the mechanism is transparent to TCP and requires no modifications to current TCP implementations.
Converged Networking, 2003
Today's Internet provides only best-effort service for all traffic. The network cannot guarantee the quality of service required by applications that demand more stringent bounds on delay, jitter, and bandwidth. It is well accepted that the deployment of QoS-aware technologies is a key factor for the continued success of the Internet. In this paper, we propose a Fair Intelligent Congestion Control Resource Discovery (FICCRD) protocol for TCP-based networks, whereby a mechanism is employed at core routers to determine available network resources and convey this information to edge routers. At the edge routers, an intelligent control algorithm is employed to assist TCP in maximizing its traffic over the underlying network. The key ideas are to integrate available network resources in estimating connections' fair share; to create feedback control loops between edge routers; to employ a special Resource Discovery (RD) packet to collect and convey en-route router state information; and to employ intelligent algorithms to match a TCP connection's sending rate to the rate the underlying network can support. We demonstrate that the FICCRD protocol is effective, fair, and flexible, and can be easily extended for QoS control of the future Internet.
asiafi.net
The Transmission Control Protocol (TCP) has contributed to the tremendous success of the Internet, but it also exhibits problems that become more and more significant as the network grows. Although numerous congestion control algorithms have been proposed to improve the performance of TCP in heterogeneous networks, designing an algorithm that achieves high utilization, ensures fairness, and maintains stability remains a great challenge. In this paper, we propose a novel congestion control algorithm named FTCP to tackle these challenges. FTCP addresses them by adjusting its initial congestion window (cwnd) and the cwnd update rate with the aid of measurements of the bottleneck queue length. To evaluate the performance of our algorithm, extensive experiments have been performed using a network simulation tool. Experimental results show that FTCP has clear advantages in efficiency, fairness, TCP friendliness, and stability compared to existing state-of-the-art congestion control algorithms.
Computer Networks, 2010
Our study is motivated by the need to enable quality of service (QoS), congestion control and fair rate allocation for all end applications. We propose a new approach to address these needs which differs from the current practice whereby end applications pursue their own rate control using TCP. Our approach comprises a network rate management protocol (RMP) that controls the rate of all flows (at an aggregate level based on routes) subject to QoS requirements. The RMP control also facilitates a new TCP sliding-window congestion control based on the fair target rates computed by the RMP. Each non-TCP aggregate flow is policed by its respective edge router, and each TCP flow adapts its window size so as to achieve the RMP-suggested fair target rate. The stability analysis of the new TCP congestion control is performed in a linearly scalable framework, which is less restrictive than a fluid model. We show that our proposed control is linearly scalable and establish its global asymptotic stability under arbitrary and variable information time lags, i.e., totally asynchronous conditions. The stability and viability of our control are verified by two means. One is a simulation of a network comprising 74 core links and up to 768 flows, each using its own access link. The simulation is also used to compare our control with the congestion control algorithms used in Fast, Vegas and Reno TCPs. The second verification means is an actual implementation of the control in the Linux kernel and its experimentation in a WAN testbed network comprising six routers and long-haul links running UDP flows as well as CUBIC, N-RENO and C-TCP flows. Our experiments demonstrate that our approach can guarantee fair rates for all flows and QoS to premium flows.
Internet users always seek service prioritization, which can be summarized as "give important network traffic precedence over unimportant network traffic". Conventional methods categorize traffic by treating the existing "best-effort" class as a low-priority (LP) class and then devise mechanisms that provide "better-than-best-effort" service. This paper presents the idea of a distributed low-priority algorithm whose objective is to utilize only the leftover bandwidth, giving priority to delay-sensitive traffic generated by interactive applications or media-streaming applications. It devises an efficient novel distributed algorithm that implements an LP service operating from the communication endpoints alongside the existing best-effort traffic service.
We present performance studies of TCP Westwood (TCPW), a sender-side modification of the congestion window control scheme in TCP. TCP Westwood relies on end-to-end rate estimation. The key innovative idea is to continuously measure at the TCP sender the packet rate of the connection by monitoring the rate of returning ACKs. The estimate is then used to compute congestion window and slow start threshold after a congestion episode, that is, after three duplicate acknowledgments or a timeout. The rationale of this strategy is simple: in contrast with TCP Reno, which "blindly" halves the congestion window after three duplicate ACKs, TCP Westwood attempts to select a slow start threshold and a congestion window which are consistent with the effective connection rate at the time congestion is experienced. The proposed faster recovery mechanism is particularly effective over wireless links where sporadic losses due to radio link problems are often misinterpreted as a symptom of congestion by current TCP schemes, and thus lead to unnecessary window reduction. Experimental studies reveal improvements in throughput performance, as well as in fairness. In addition, friendliness with TCP Reno was confirmed in a set of experiments. TCP Reno connections are not starved by TCPW connections; on the contrary, the TCP Reno connections continue to make satisfactory progress. TCPW is shown here to be extremely effective in mixed wired and wireless networks and in high speed networks. Throughput improvements of up to 615 % are observed. Internet measurements using a Linux TCPW implementation are also reported in this paper, providing further evidence of the gains achievable via TCPW.
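The core TCPW idea described above — estimate the connection's rate from the stream of returning ACKs and use that estimate, rather than blind halving, to set the slow start threshold after a loss — can be sketched as follows. This is an illustrative simplification, not the paper's exact filter; the smoothing constant, the `WestwoodEstimator` class, and the segment size are assumptions for the example.

```python
SEG_SIZE = 1460  # assumed MSS in bytes

class WestwoodEstimator:
    def __init__(self, alpha=0.9):
        self.alpha = alpha   # smoothing factor of the low-pass filter (illustrative)
        self.bwe = 0.0       # bandwidth estimate, bytes/second

    def on_ack(self, acked_bytes, interval_s):
        """Update the estimate from one ACK: sample = bytes acked / time since last ACK."""
        sample = acked_bytes / interval_s
        self.bwe = self.alpha * self.bwe + (1 - self.alpha) * sample

    def after_congestion(self, rtt_min_s):
        """ssthresh in segments = estimated rate * minimum RTT (the pipe size)."""
        return max(2, int(self.bwe * rtt_min_s / SEG_SIZE))

est = WestwoodEstimator()
for _ in range(50):              # a steady stream of ACKs, one per 10 ms
    est.on_ack(SEG_SIZE, 0.010)  # per-ACK rate sample: 146,000 bytes/s
ssthresh = est.after_congestion(rtt_min_s=0.1)
```

After three duplicate ACKs, a Westwood-style sender would set ssthresh (and cwnd after a timeout) from this pipe-size estimate instead of halving, which is what makes it robust to sporadic wireless losses.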
2006
This paper describes and evaluates a new scheme to reduce network congestion and the unfair bandwidth allocation among competing flows. These two problems can be caused by unresponsive flows or non-TCP-friendly flows. Unresponsive flows, such as UDP flows, do not regulate their transmission rate in response to network congestion. A non-TCP-friendly flow is, for example, a TCP flow with a small round-trip time (RTT). These flows can consume a large part of the network bandwidth, degrading network performance. The new scheme uses TCP behavior to estimate the packet drop probability of each flow, reducing the unfair sharing of network bandwidth among TCP flows with different RTTs. As for UDP flows that consume a large amount of bandwidth, the algorithm drops their packets before they enter the network. One advantage of this scheme is that it decreases the number of packets dropped inside the network; another is that it regulates the bandwidth share among flows fairly; the last is that it improves link utilization.
As the Internet is expected to better support applications such as multimedia under limited bandwidth, new mechanisms are needed to control congestion in the network. Congestion control plays the key role in ensuring the stability of the Internet along with fair and efficient allocation of bandwidth, so it remains a large area of research and concern in the networking community. Many congestion control mechanisms have been developed and refined by researchers aiming to overcome congestion; during the last decade, several have been proposed specifically to improve TCP congestion control.
Indonesian Journal of Electrical Engineering and Computer Science, 2016
Transmission Control Protocol (TCP) is used by many applications on the Internet for reliable data transmission. TCP is not able to utilize the available link bandwidth quickly and efficiently in high-bandwidth short-distance (HBSD) and high-bandwidth long-distance (HBLD) networks. Many congestion control techniques, also known as TCP variants, have been developed to solve these problems in different network environments. In this paper, an experimental analysis evaluates the performance of TCP CUBIC, TCP Compound, TCP Reno and HighSpeed TCP in terms of inter- and intra-protocol fairness using Network Simulator 2 (NS-2). Results show that the performance of TCP CUBIC degrades severely, whereas TCP Compound and TCP Reno perform well in terms of protocol fairness. However, these congestion control techniques still need improvement in their utilization of available link bandwidth and other network resources in HBLD networks.
2001
The standard end-to-end flow control implemented by the TCP protocol is ill-suited when it comes to achieving fair bandwidth allocation among competing TCP flows. Indeed, the lack of feedback from intermediate nodes does not allow a TCP source to regulate its throughput so that it is not sending more than its fair share, thus penalizing other, less aggressive flows. In this paper, we propose a novel scheme that, building on existing work on network-layer stateless fair queueing, extends the approach to the TCP layer. Also, we discuss a possible implementation of both the network-layer and the transport-layer architecture. We test our solution under different traffic scenarios and show that not only is a fair bandwidth allocation achieved, but the overall network utilization for TCP flows is also increased.
TCP, or Transmission Control Protocol, is one of the prevailing "languages" of the Internet Protocol Suite, complementing the Internet Protocol (IP); the entire suite is therefore commonly referred to as TCP/IP. TCP provides reliable data transfer for all end-to-end data stream services on the Internet. The protocol is used by major Internet applications such as e-mail, file transfer, remote administration and the World Wide Web. Applications that do not require a reliable data stream service may use the User Datagram Protocol (UDP), which provides a datagram service that emphasizes reduced latency over reliability. Determining the available bandwidth of a TCP packet flow is, in fact, very tedious and complicated; the complexity arises from the effects of congestion control in both the network dynamics and TCP. Congestion control is the accepted mechanism used to detect the optimum bandwidth at which the TCP sender should send packets. Understanding TCP behaviour and the approaches used to enhance TCP performance still remains a major challenge, and a considerable amount of research has been carried out with a view to developing good mechanisms that raise the efficiency of TCP. The article analyses and investigates the congestion control technique applied by TCP, and identifies the main parameters and requirements needed to design and develop a new congestion control mechanism.
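The congestion control technique the article refers to is, at its heart, an additive-increase/multiplicative-decrease (AIMD) rule on the congestion window: grow by one segment per RTT while no loss is detected, halve on a loss signal. A minimal sketch (no particular TCP variant; the traffic pattern is made up for illustration):

```python
def aimd_step(cwnd, loss):
    """One RTT of AIMD: add one segment without loss, halve (floor 1) on loss."""
    return max(1.0, cwnd / 2) if loss else cwnd + 1

cwnd = 10.0
trace = []
# four loss-free RTTs, one loss, three more loss-free RTTs
for loss in [False] * 4 + [True] + [False] * 3:
    cwnd = aimd_step(cwnd, loss)
    trace.append(cwnd)
# trace is [11.0, 12.0, 13.0, 14.0, 7.0, 8.0, 9.0, 10.0]: the familiar sawtooth
```

The sawtooth this produces is what lets the sender probe for the optimum bandwidth without explicit feedback from the network.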
2012
The Transmission Control Protocol (TCP) carries most Internet traffic, so performance of the Internet depends to a great extent on how well TCP works. Performance characteristics of a particular version of TCP are defined by the congestion control algorithm it employs. This paper presents a survey of various congestion control proposals that preserve the original host-to-host idea of TCP—namely, that neither sender nor receiver relies on any explicit notification from the network. The proposed solutions focus on a variety of problems, starting with the basic problem of eliminating the phenomenon of congestion collapse, and also include the problems of effectively using the available network resources in different types of environments (wired, wireless, high-speed, long-delay, etc.). In a shared, highly distributed, and heterogeneous environment such as the Internet, effective network use depends not only on how well a single TCP-based application can utilize the network capacity, but...
IEEE/ACM Transactions on Networking, 2005
We consider a modification of TCP congestion control in which the congestion window is adapted to explicit bottleneck rate feedback; we call this RATCP (Rate Adaptive TCP). Our goal in this paper is to study and compare the performance of RATCP and TCP in various network scenarios with a view to understanding the possibilities and limits of providing better feedback to TCP than just implicit feedback via packet loss. To understand the dynamics of rate feedback and window control, we develop and analyze a model for a long-lived RATCP (and TCP) session that gets a time-varying rate on a bottleneck link. We also conduct experiments on a Linux-based test-bed to study issues such as fairness, random losses, and randomly arriving short file transfers. We find that the analysis matches well with the results from the test-bed. For large file transfers under low background load, ideal fair-rate feedback improves the performance of TCP by 15%-20%. For small randomly arriving file transfers, though RATCP performs only slightly better than TCP, it reduces losses and the variability of throughput across sessions. RATCP distinguishes between congestion and corruption losses, and ensures fairness for sessions with different round-trip times sharing the bottleneck link. We believe that rate feedback mechanisms can be implemented using distributed flow control and the recently proposed REM, in which case the ECN bit itself can be used to provide the rate feedback.
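Adapting a window to an explicit rate, as RATCP-style schemes do, amounts to converting the fed-back fair rate into a congestion window via the bandwidth-delay product, W = rate × RTT. A hedged sketch (the function name and the sample numbers are illustrative, not from the paper):

```python
def window_from_rate(rate_bps, rtt_s, mss_bytes=1460):
    """Segments that must be in flight to sustain rate_bps over a path with RTT rtt_s."""
    bdp_bytes = (rate_bps / 8) * rtt_s   # bandwidth-delay product in bytes
    return max(1, round(bdp_bytes / mss_bytes))

# A 10 Mbit/s fair share over an 80 ms RTT needs about 68 segments in flight.
w = window_from_rate(10_000_000, 0.080)
```

The sender would clamp its congestion window to this value each time fresh rate feedback arrives, instead of waiting for loss to signal that the window is too large.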
2005
Under the TCP congestion control regime, heterogeneous flows, i.e., flows with different round-trip times (RTTs), that share the same bottleneck link will not attain equal portions of the available bandwidth. In fact, according to the TCP-friendly formula [1], the throughput ratio of two flows is inversely proportional to the ratio of their RTTs. It has also been shown that TCP's unfairness to flows with longer RTTs is accentuated under loss synchronization. Well-known mechanisms to avoid synchronization are based on injecting randomness into the network, e.g., introducing background traffic or using random drop (as opposed to drop-tail queuing). In this paper, we show that, in high-speed networks, injecting bursty background traffic may actually lead to synchronization and result in unfairness to foreground TCP flows with longer RTTs. We observe that unfairness is especially severe in high-speed variants of TCP such as Scalable TCP (S-TCP) and HighSpeed TCP (HSTCP). We propose three different metrics to characterize traffic burstiness and show that these metrics are reliable predictors of TCP unfairness. Finally, we show that TCP unfairness (including TCP SACK, S-TCP, and HSTCP) in high-speed networks due to bursty background traffic can be mitigated through the use of random-drop queuing disciplines (such as RED) at bottleneck routers.
Abstract In this short paper, we make the case for a different way of looking at TCP-friendly traffic controls. Instead of forcing each flow to converge to a TCP-fair rate, we propose letting inelastic and elastic flows each implement their natural traffic control and co-exist, and we explain why this is better than the current TCP-friendly approach. We then point out a very challenging research problem in network resource allocation in the current Internet: many new applications use multiple connections concurrently.
ACM SIGMETRICS Performance Evaluation Review, 2012
In this paper we address the problem of fast and fair transmission of flows in a router, which is a fundamental issue in networks like the Internet. We model the interaction between a source using the Transmission Control Protocol (TCP) and a bottleneck router with the objective of designing optimal packet admission controls in the router queue. We focus on the relaxed version of the problem obtained by relaxing the fixed buffer capacity constraint that must be satisfied at all time epochs. The relaxation allows us to reduce the multi-flow problem to a family of single-flow problems, for which we can analyze both theoretically and numerically the existence of optimal control policies of special structure. In particular, we show that for a variety of parameters, TCP flows can be optimally controlled in routers by so-called index policies, but not always by threshold policies. We have also implemented the index policy in Network Simulator-3 and tested its applicability to real networks in a simple topology. The simulation results show that the index policy achieves a wide range of desirable properties with respect to fairness between different TCP versions, fairness across users with different round-trip times, and the minimum buffer required to achieve full utility of the queue.
The demand for fast transfer of large volumes of data, and for the deployment of network infrastructures to support it, is ever increasing. However, the dominant transport protocol of today, TCP, does not meet this demand because it favors reliability over timeliness and fails to fully utilize the network capacity due to the limitations of its conservative congestion control algorithm. The slow response of TCP in fast long-distance networks leaves sizeable unused bandwidth in such networks. A large variety of TCP variants have been proposed to improve connection throughput by adopting more aggressive congestion control algorithms. Loss-based, high-speed TCP congestion control algorithms use packet losses as an indication of congestion; delay-based TCP congestion control emphasizes packet delay rather than packet loss as the signal that determines the rate at which to send packets. Some efforts combine the features of loss-based and delay-based algorithms to achieve fair bandwidth allocation among flows. A comparative analysis of different flavors of TCP congestion control, namely standard TCP congestion control (TCP Reno), loss-based TCP congestion control (HighSpeed TCP, Scalable TCP, CUBIC TCP), delay-based TCP congestion control (TCP Vegas) and mixed loss-delay-based TCP congestion control (Compound TCP), is presented here in terms of congestion window versus elapsed time after the connection is established.
Computer Networks, 2013
In this paper we address the problem of fast and fair transmission of flows in a router, which is a fundamental issue in networks like the Internet. We model the interaction between a TCP source and a bottleneck queue with the objective of designing optimal packet admission controls in the bottleneck queue. We focus on the relaxed version of the problem obtained by relaxing the fixed buffer capacity constraint that must be satisfied at all time epochs. The relaxation allows us to reduce the multi-flow problem to a family of single-flow problems, for which we can analyze both theoretically and numerically the existence of optimal control policies of special structure. In particular, we show that for a variety of parameters, TCP flows can be optimally controlled in routers by so-called index policies, but not always by threshold policies. We have also implemented index policies in Network Simulator-3 and tested their applicability to real networks in a simple topology. The simulation results show that the index policy achieves a wide range of desirable properties with respect to fairness between different TCP models, fairness across users with different round-trip times, and the minimum buffer required to achieve full utility of the queue.
International Journal of Communication Networks and Distributed Systems, 2016
In order to curtail the escalating packet loss rates caused by an exponential increase in network traffic, active queue management techniques such as Random Early Detection (RED) have come into the picture. Flow Random Early Drop (FRED) keeps state based on the instantaneous queue occupancy of a given flow. FRED protects fragile flows by deterministically accepting flows from low-bandwidth connections and fixes several shortcomings of RED by computing the queue length during both arrival and departure of packets. Stochastic Fair Queuing (SFQ) ensures fair access to network resources and prevents a bursty flow from consuming more than its fair share. In the case of Random Exponential Marking (REM), the key idea is to decouple the congestion measure from the performance measure (loss, queue length or delay). Stabilized RED (SRED) is another approach to detecting non-responsive flows. In this paper, we present a comparative analysis of throughput, delay and queue length for the congestion control algorithms RED, SFQ and REM, together with a comparative analysis of loss rate under different bandwidths for these algorithms.
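The RED scheme at the base of the algorithms compared above works by tracking an exponentially weighted moving average of the queue length and dropping (or marking) arrivals with a probability that rises linearly between two thresholds. A sketch with illustrative parameter values (the thresholds and weights here are examples, not recommended settings):

```python
def red_drop_prob(avg_q, min_th=5, max_th=15, max_p=0.1):
    """RED's drop probability as a function of the average queue length."""
    if avg_q < min_th:
        return 0.0            # below min threshold: never drop
    if avg_q >= max_th:
        return 1.0            # above max threshold: always drop
    return max_p * (avg_q - min_th) / (max_th - min_th)

def update_avg(avg_q, sample, wq=0.002):
    """EWMA of the instantaneous queue length, as RED maintains it."""
    return (1 - wq) * avg_q + wq * sample

for q in (4, 10, 20):
    print(q, red_drop_prob(q))  # 0.0 below min_th, ~0.05 midway, 1.0 above max_th
```

Averaging over the instantaneous queue is what lets RED absorb short bursts while still signalling persistent congestion early, which is the behavior FRED and SRED refine per flow.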
Lecture Notes in Computer Science, 2002
TCP Westwood (TCPW) is a sender-side-only modification of TCP Reno congestion control, which exploits end-to-end bandwidth estimation to properly set the values of the slow-start threshold and congestion window after a congestion episode. This paper aims to show, via both mathematical modeling and extensive simulations, that TCPW significantly improves fair sharing of high-speed network capacity and that TCPW is friendly to TCP Reno. Moreover, we propose EASY RED, a simple Active Queue Management (AQM) scheme that improves fair sharing of network capacity, especially over high-speed networks. Simulation results show that TCP Westwood provides a remarkable Jain's fairness index increment of up to 200% with respect to TCP Reno and confirm that TCPW is friendly to TCP Reno. Finally, simulations show that EASY RED improves the fairness of Reno connections more than RED, whereas the improvement for Westwood connections is much smaller, since Westwood already exhibits fairer behavior by itself.
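Jain's fairness index, the metric used in the evaluation above, is J = (Σxᵢ)² / (n·Σxᵢ²) over the per-flow throughputs xᵢ: it equals 1 for perfectly equal shares and falls toward 1/n as one flow dominates. A small sketch with made-up throughput values:

```python
def jain_index(throughputs):
    """Jain's fairness index over a list of per-flow throughputs."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(x * x for x in throughputs))

print(jain_index([10, 10, 10, 10]))          # equal shares: 1.0, perfectly fair
print(round(jain_index([40, 5, 5, 5]), 3))   # one dominant flow: ~0.451
```

A "200% increment" in this index therefore means moving from a highly skewed allocation much closer to the J = 1 equal-share point.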
Journal of Engineering and Applied Sciences, 2021
Congestion control in networks is a priority for maintaining high and error-free data rates. The work in this paper investigates the attainment of optimized TCP congestion control over wired networks through a proposed modification to one of the existing TCP protocols. First, the proposed approach examines different queue management disciplines, e.g., Drop Tail and RED; a comparison of the two clearly demonstrates that RED performs better with respect to optimized TCP congestion control. This investigation is followed by simulating different TCP versions, namely TCP Tahoe, TCP Sack, TCP NewReno, TCP Reno and TCP Westwood, to check and compare their effective resource utilization: network throughput, comparative bandwidth, retransmission rates and window sizes. The simulations were carried out using the NS-2 (Network Simulator-2) software. From the simulation results, as shown in this paper, TCP Westwood was found to be the best candidate for further development among the mentioned flavors based on its output performance. The main objective of the research presented in this paper was to modify TCP Westwood and to propose a new variant of it, named "TCP New-Westwood", providing a higher degree of TCP congestion control.