2013, IEEE Transactions on Communications
Quantized Congestion Notification (QCN) has been developed for IEEE 802.1Qau by the IEEE Data Center Bridging Task Group to provide congestion control at the Ethernet layer (Layer 2) in data center networks (DCNs). One drawback of QCN is rate unfairness among different flows sharing one bottleneck link. In this paper, we propose an enhanced QCN congestion notification algorithm, called fair QCN (FQCN), to improve the fairness of rate allocation among multiple flows sharing one bottleneck link in DCNs. FQCN identifies congestion culprits through joint queue and per-flow monitoring, feeds back individual congestion information to each culprit via multicast, and ensures convergence to statistical fairness. We analyze the stability and fairness of FQCN via Lyapunov functions and evaluate its performance through simulations in terms of queue length stability, link throughput, and rate allocations to traffic flows with different traffic dynamics under three network topologies. Simulation results confirm the rate allocation unfairness of QCN, and validate that FQCN maintains queue length stability, successfully allocates a fair share of the link capacity to each traffic source, and enhances TCP throughput performance in the TCP Incast setting.
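The per-flow monitoring idea described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: a congestion point that tracks how many queued bytes belong to each flow can flag as "culprits" the flows holding more than an equal share of the queue, and would then send individual feedback to each one.

```python
# Illustrative sketch (names and structure are assumptions, not FQCN's
# actual pseudocode): identify congestion culprits from per-flow queue
# occupancy at a congestion point.

def identify_culprits(per_flow_bytes):
    """per_flow_bytes: dict mapping flow id -> bytes currently queued.
    Returns the flows occupying more than an equal share of the queue."""
    total = sum(per_flow_bytes.values())
    if total == 0:
        return []
    fair_share = total / len(per_flow_bytes)
    return [f for f, b in per_flow_bytes.items() if b > fair_share]

print(identify_culprits({"A": 700, "B": 200, "C": 100}))  # ['A']
```

Only flow A exceeds the equal share (1000/3 bytes), so only A would receive rate-decrease feedback; B and C are left untouched, which is the intuition behind converging to a fair allocation.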
With the rapid growth of Data Center Networks (DCNs), researchers have observed many problems related to their architecture, congestion control, and TCP throughput. Congestion notification in DCNs has improved considerably in recent years. Data Center Ethernet (DCE) is now used more widely than competing technologies: Ethernet is low-cost and is the primary protocol used in DCNs. However, Ethernet can also suffer packet drops and low bandwidth utilization, which in turn lead to congestion. To address this problem, various Ethernet protocols have been developed to prevent congestion at the switch, such as Backward Congestion Notification (BCN), Forward Explicit Congestion Notification (FECN), Enhanced Forward Explicit Congestion Notification (E-FECN), Quantized Congestion Notification (QCN), Approximate Fair Quantized Congestion Notification (AF-QCN), and Fair Quantized Congestion Notification (FQCN). Among these algorithms, QCN has been accepted as the formal standard for congestion control. QCN achieves only proportional fairness, and therefore does not produce the desired result when a single bottleneck link is shared among a number of flows. In this paper, we analyze the performance of QCN in terms of stability and then improve its fairness toward max-min fairness.
2010 18th IEEE Symposium on High Performance Interconnects, 2010
Data Center Networks represent the convergence of computing and networking, of data and storage networks, and of packet transport mechanisms in Layers 2 and 3. Congestion control algorithms are a key component of data transport in this type of network. Recently, a Layer 2 congestion management algorithm, called QCN (Quantized Congestion Notification), has been adopted for the IEEE 802.1 Data Center Bridging standard: IEEE 802.1Qau. The QCN algorithm has been designed to be stable, responsive, and simple to implement. However, it does not provide weighted fairness, where the weights can be set by the operator on a per-flow or per-class basis. Such a feature can be very useful in multi-tenanted Cloud Computing and Data Center environments.
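The weighted-fairness feature this abstract says QCN lacks can be illustrated with a small sketch. The names below are mine, not the paper's API: the idea (as in AF-QCN-style schemes) is that an operator assigns a weight per flow or per tenant class, and each one's share of the bottleneck capacity is scaled by its weight.

```python
# Hypothetical sketch of operator-weighted fair shares; identifiers are
# illustrative, not from the paper.

def weighted_shares(capacity, weights):
    """capacity: link capacity (e.g. Gb/s); weights: dict flow -> weight.
    Returns each flow's weighted fair share of the capacity."""
    total = sum(weights.values())
    return {f: capacity * w / total for f, w in weights.items()}

print(weighted_shares(10.0, {"tenantA": 3, "tenantB": 1}))
# {'tenantA': 7.5, 'tenantB': 2.5}
```

With equal weights this reduces to plain fair sharing; unequal weights let a multi-tenant operator give one class a proportionally larger slice, which is the use case the abstract highlights for Cloud and Data Center environments.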
2010
This paper analyzes the performance of Ethernet layer congestion control mechanism Quantized Congestion Notification (QCN) during data access from clustered servers in data centers. We analyze the reasons why QCN does not perform adequately in these situations and propose several modifications to the protocol to improve its performance in these scenarios. We trace the causes of QCN performance degradation to flow rate variability, and show that adaptive sampling at the switch and adaptive self-increase of flow rates at the rate limiter improve performance in a TCP Incast setup significantly. We compare the performance of QCN against TCP modifications in a heterogeneous environment, and show that modifications to QCN yield better performance.
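The adaptive-sampling modification mentioned above can be sketched roughly as follows. All parameters here are invented for illustration: the switch shortens the interval between congestion samples as the magnitude of its last feedback grows, so rapidly varying flows are detected and throttled sooner.

```python
# Hedged sketch of adaptive sampling at the switch; the base interval and
# scaling factor are assumptions, not values from the paper.

def next_sample_bytes(fb, base=150_000, fb_max=64):
    """Bytes to let pass before taking the next congestion sample.
    fb: magnitude of the last computed feedback (0 when uncongested);
    QCN quantizes feedback to 6 bits, hence fb_max = 64."""
    severity = min(abs(fb), fb_max) / fb_max
    return int(base * (1.0 - 0.75 * severity))

print(next_sample_bytes(0))   # 150000  (uncongested: sample rarely)
print(next_sample_bytes(64))  # 37500   (heavy congestion: sample often)
```

The point of the sketch is only the direction of the adaptation: sampling frequency rises with congestion severity instead of staying fixed.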
2016
Demand for lossless, low-latency networks in modern datacenters is growing because of the proliferation of demanding applications. Congestion control schemes introduced by IEEE 802.1, such as CN, PFC, and ETS, focus on the L2 network domain, while current TCP/IP stacks cannot meet these requirements on L3 or above. This draft introduces L3QCN (Layer 3 Quantized Congestion Notification), an end-to-end congestion control scheme that builds on QCN and DCQCN on the L2 network. It specifies protocols, procedures, and managed objects to support congestion control in the datacenter network.
2008
Data Center Networks present a novel, unique and rich environment for algorithm development and deployment. Projects are underway in the IEEE 802.1 standards body, especially in the Data Center Bridging Task Group, to define new switched Ethernet functions for data center use. One such project is IEEE 802.1Qau, the Congestion Notification project, whose aim is to develop an Ethernet congestion control algorithm for hardware implementation. A major contribution of this paper is the description and analysis of the congestion control algorithm QCN (Quantized Congestion Notification), which has been developed for this purpose. A second contribution of the paper is an articulation of the Averaging Principle: a simple method for making congestion control loops stable in the face of increasing lags. This contrasts with two well-known methods of stabilizing control loops as lags increase; namely, (i) increasing the order of the system by sensing and feeding back higher-order derivatives of the state, and (ii) determining the lag and then choosing appropriate loop gains. Both methods have been applied in the congestion control literature to obtain stable algorithms for high bandwidth-delay product paths in the Internet. However, these methods are either undesirable or infeasible in the Ethernet context. The Averaging Principle provides a simple alternative, one which we are able to theoretically characterize.
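The Averaging Principle mentioned above appears in QCN's rate limiter as its Fast Recovery phase. The sketch below is my own toy rendering, not the paper's pseudocode: after a rate cut, the target rate is frozen and, each recovery cycle, the current rate moves halfway toward it, so the source approaches the target geometrically instead of jumping back and overshooting.

```python
# Illustrative sketch of averaging-based rate recovery (assumed shape,
# not the standard's exact state machine).

def fast_recovery(current_rate, target_rate, cycles=5):
    """Each cycle, average the current rate with the frozen target rate."""
    rates = []
    for _ in range(cycles):
        current_rate = (current_rate + target_rate) / 2.0
        rates.append(current_rate)
    return rates

# A source cut from 10 Gb/s to 5 Gb/s recovers geometrically:
print(fast_recovery(5.0, 10.0))  # [7.5, 8.75, 9.375, 9.6875, 9.84375]
```

Each step halves the remaining gap to the target, which damps oscillation without requiring the switch to measure lags or feed back derivatives of the queue state.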
IEEE/ACM Transactions on Networking, 2005
This paper is aimed at designing a congestion control system that scales gracefully with network capacity, providing high utilization, low queueing delay, dynamic stability, and fairness among users. The focus is on developing decentralized control laws at end-systems and routers at the level of fluid-flow models, that can provably satisfy such properties in arbitrary networks, and subsequently approximate these features through practical packet-level implementations. Two families of control laws are developed. The first "dual" control law is able to achieve the first three objectives for arbitrary networks and delays, but is forced to constrain the resource allocation policy. We subsequently develop a "primal-dual" law that overcomes this limitation and allows sources to match their steady-state preferences at a slower timescale, provided a bound on round-trip times is known. We develop two packet-level implementations of this protocol, using 1) ECN marking, and 2) queueing delay, as means of communicating the congestion measure from links to sources. We demonstrate using ns-2 simulations the stability of the protocol and its equilibrium features in terms of utilization, queueing and fairness, under a variety of scaling parameters.
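A fluid-model "dual" law of the kind described above can be sketched in a few lines. The notation and parameters below are mine, not the paper's: each link updates a congestion price from excess demand, and each source with log-utility responds by setting its rate to the inverse of the price it sees.

```python
# Toy discrete-time dual control law for one link shared by n identical
# log-utility sources (my simplification; delays and networks of links
# are omitted).

def dual_iteration(capacity=1.0, n_sources=2, gamma=0.1, steps=2000):
    p = 1.0                                   # link price
    x = 1.0                                   # per-source rate
    for _ in range(steps):
        x = 1.0 / p                           # source: U(x)=log x  =>  x = 1/p
        y = n_sources * x                     # aggregate rate at the link
        p = max(1e-6, p + gamma * (y - capacity))  # price rises with excess demand
    return x, p

x, p = dual_iteration()
```

At equilibrium the aggregate rate matches capacity, so with two sources and unit capacity the price converges to 2 and each source to rate 1/2, illustrating how the decentralized updates reach full utilization with proportional fairness.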
IEEE Systems Journal, 2019
Data center networking has brought a new era of data-intensive applications such as remote direct memory access, high-performance computing, and cloud computing, which raise new challenges for network researchers. Such applications require minimum network latency, no packet loss, and fairness between flows. Therefore, the IEEE Data Center Bridging Task Group presents several enhancements for Ethernet networks to fulfill these requirements. In this context, we investigate the possibility of achieving dropless Ethernet. We extend our previously proposed congestion control protocol, named Heterogeneous Flow (HetFlow), to achieve minimum queue length and consequently minimum network latency. In addition, we present a mathematical model, stability analysis, and scalability study of the proposed protocol. Further, extensive simulation experiments are conducted to verify our mathematical analysis. Moreover, it is illustrated by simulations that HetFlow improves fairness between flows of different packet sizes and different round trip times.
Transactions on Emerging Telecommunications Technologies, 2016
The Quantized Congestion Notification (QCN) is a Layer 2 congestion control scheme for Carrier Ethernet data center networks. QCN has been standardized as the IEEE 802.1Qau Ethernet Congestion Notification standard. This paper reports the results of a QCN study with multicast traffic and proposes an enhancement to QCN. In order to scale up, the feedback implosion problem has to be solved. Therefore, we resort to the representative technique, which uses a selected congestion point (i.e., the overloaded queue in a switch) to provide timely and accurate feedback on behalf of the congested switches in the path of multicast traffic. This paper evaluates the rate variation, feedback overhead, loss rate, stability, fairness, and scalability of the standard QCN with multicast traffic and of the enhanced QCN for multicast traffic, and compares their performance. The evaluation results show that the proposed QCN enhancement for multicast traffic outperforms the standard QCN with multicast traffic. Indeed, the feedback implosion problem is settled by remarkably decreasing the feedback rate.
IET Networks, 2018
Data Center Ethernet (DCE) is a budding research area that has received considerable attention from the information and communications technology sector. Traditional DCEs, especially IEEE 802.3, are considered unreliable despite being widely used as the local area network technology of modern day data centers. In Ethernet intermediate layer-2 switching devices, the outgoing traffic between the source and destination is faster than the incoming traffic and therefore results in packet drops. Ethernet reliability is provided by the upper layer protocols, which is prohibited by the initial concept of the network. As such, various congestion notification (CN) techniques for hop-by-hop based flow control have been proposed over the years for layer-2 devices to address the issue of silent packet drops. However, there is a dearth of comprehensive surveys in this area; moreover, a simulation-based evaluation of IEEE standards that solely focus on CN techniques remains lacking. This study investigates CN techniques for layer-2 devices that employ a hop-by-hop based flow control. It also highlights the challenges confronting CN techniques in determining the optimal buffer threshold. In addition, the emerging Fiber Channel over Ethernet protocol and the relation of this protocol to CN are emphasized. A simulation-based evaluation of IEEE standards (IEEE 802.3x and IEEE 802.1Qbb) is performed on a hop-by-hop-based flow control with the traditional IEEE 802.3 Ethernet under different traffic loads to gauge the effect on network performance. The parameters, such as throughput, end-to-end delay, and buffer space utilization, are evaluated through a simulation-based comparison.
1 Introduction
In the past two and a half decades, Ethernet has become the dominant communication technology adopted worldwide. As workplaces become increasingly automated and network based along with the rise of the Internet, Ethernet has also become increasingly ubiquitous.
Nearly all organizations today employ Ethernet to link their workplace computers and to use the Internet. Ethernet is successful because this technology has a wide base and is cost efficient, simple, and mature. Although communication systems are moving toward wireless technologies, Ethernet is still the prominent technology and remains vital in communications. It is the backbone of current communication systems, in which nearly everything wired or wireless is connected to the Internet. The architectures of data centers in the present wireless and mobile age are still based on wired Ethernet. IEEE 802.3 Ethernet is traditionally regarded as an unreliable networking technology in most applications because it does not guarantee the delivery of packets to their destinations [1-4]. A reliable network architecture consists of an infrastructure that is responsible for the reliable delivery of injected packets in the network. However, this capability in Ethernet is provided by upper layer protocols, which contradicts basic networking concepts [4-6]. Congestion management in the current implementation of Ethernet is performed through upper layer protocols. The transmission control protocol (TCP) is one of the most dominant transport layer protocols for providing reliability in Ethernet [4, 7]. A network switch simply drops packets when it is congested, and TCP infers the occurrence of network congestion by detecting the packet drops. The path between the source and destination in Ethernet usually comprises multiple hops, and the insufficient feedback on buffer occupancy and forwarding data among the intermediate hops creates a communication gap that eventually leads to unreliable communication [3-5]. Considering layer-2 switches at each hop as the sender and receiver, the processing speed of the receiver is always lower than the forwarding speed of the sender; thus, the receiver stores the incoming packets in the buffer [3-5].
When the receiver buffer can no longer absorb the incoming traffic, the receiver is forced to silently drop the packets that cannot be accommodated in the buffer. These silent packet drops render Ethernet unreliable because the sender is unaware of such occurrences. However, packet drops are acceptable as long as upper layer protocols handle and retransmit the drops. For example, applications, such as the file transfer protocol (FTP), use TCP to handle and retransmit packet drops. However, this technique can cause issues in applications in which reliability cannot be incorporated in the upper layers, as in the case of real-time applications (audio-video conferencing and online gaming) that use User Datagram Protocol or Internetwork Packet Exchange. Hence, a hop-by-hop based flow control mechanism is required to provide reliability to such applications. This mechanism not only provides the reliability required by real-time applications on a hop-by-hop basis but also improves infrastructure reliability, thereby eliminating the need for upper layer reliability protocols. For this mechanism, certain techniques called Congestion Notification (CN) have been proposed in the past decades. These techniques function on a hop-by-hop basis, where each layer-2 switch provides feedback on the buffer availability of that particular switch to the neighboring switches. Framed by this context, the study presents the following: a) A study of CN techniques for the hop-by-hop flow control in Ethernet. b) A discussion of the challenges encountered in determining an optimal threshold for CN techniques. c) The role of Fiber Channel over Ethernet (FCoE) and the relation of this protocol to hop-by-hop based flow control mechanisms. d) A comparison of IEEE standards for CN and hop-by-hop based flow control mechanisms.
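The hop-by-hop flow control described above can be sketched with a two-threshold buffer model. The class and thresholds below are illustrative assumptions, not a specification: when occupancy crosses a high watermark, the switch would send a PAUSE (IEEE 802.3x style) to its upstream neighbor instead of silently dropping; once occupancy drains below a low watermark, it would send a resume.

```python
# Minimal sketch (names and thresholds are mine) of threshold-based
# hop-by-hop flow control at a layer-2 switch port.

class PortBuffer:
    def __init__(self, capacity, high=0.8, low=0.5):
        self.capacity = capacity
        self.high = high * capacity   # pause upstream above this
        self.low = low * capacity     # resume upstream below this
        self.occupancy = 0
        self.paused = False

    def enqueue(self, nbytes):
        self.occupancy += nbytes
        if not self.paused and self.occupancy >= self.high:
            self.paused = True        # would emit a PAUSE frame here
        return self.paused

    def dequeue(self, nbytes):
        self.occupancy = max(0, self.occupancy - nbytes)
        if self.paused and self.occupancy <= self.low:
            self.paused = False       # would emit PAUSE with quanta 0 (resume)
        return self.paused

buf = PortBuffer(capacity=100)
print(buf.enqueue(85))   # True: above high watermark, upstream paused
print(buf.dequeue(40))   # False: drained below low watermark, resumed
```

The gap between the two watermarks is the hysteresis that prevents pause/resume flapping; choosing it is exactly the optimal-threshold problem the survey highlights, and IEEE 802.1Qbb refines the same idea to per-priority pausing.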
2007
We discuss congestion control algorithms, using network awareness as a criterion to categorize different approaches. The first category ("the box is black") consists of a group of algorithms that consider the network as a black box, assuming no knowledge of its state other than the binary feedback upon congestion. The second category ("the box is grey") groups approaches that use measurements to estimate available bandwidth, level of contention, or even the temporary characteristics of congestion. Due to the possibility of wrong estimations and measurements, the network is considered a grey box. The third category ("the box is green") contains the bimodal congestion control, which calculates the fair share explicitly, as well as the network-assisted control, where the network communicates its state to the transport layer; the box now is becoming green.