2020, IEEE/ACM Transactions on Networking
Buffering architectures and policies for their efficient management are core ingredients of a network architecture. However, despite strong incentives to experiment with and deploy new policies, opportunities for changing anything beyond minor elements are limited. We introduce a new specification language, OpenQueue, that allows users to express virtual buffering architectures and management policies representing a wide variety of economic models. OpenQueue allows users to specify entire buffering architectures and policies conveniently through several comparators and simple functions. We show examples of buffer management policies in OpenQueue and empirically demonstrate its impact on performance in various settings.
2017 IEEE 25th International Conference on Network Protocols (ICNP), 2017
Buffering architectures and policies for their efficient management constitute one of the core ingredients of a network architecture. However, despite strong incentives to experiment with, and deploy, new policies, the opportunities for altering anything beyond minor elements of such policies are limited. In this work we introduce a new specification language, OpenQueue, that allows users to specify entire buffering architectures and policies conveniently through several comparators and simple functions. We show examples of buffer management policies in OpenQueue and empirically demonstrate its direct impact on performance in various settings.
Proceedings of the 2016 Symposium on Architectures for Networking and Communications Systems - ANCS '16, 2016
Buffering architectures and policies for their efficient management constitute one of the core ingredients of a network architecture. In this work we introduce a new specification language, BASEL, that allows users to express virtual buffering architectures and management policies representing a variety of economic models. BASEL does not require the user to implement policies in a high-level language; rather, the entire buffering architecture and its policy are reduced to several comparators and simple functions. We show examples of buffer management policies in BASEL and demonstrate empirically the impact of various settings on performance.
1998
As the cost of computing power decreases and network traffic patterns become more complex, it becomes worthwhile to consider the benefits of allowing users to specify policies for managing their traffic within the network. Active networking is a new design paradigm in which the network is architected not merely to forward packets, but also to be dynamically programmed in order to support per-user services.
International Journal of …, 2007
2007
As the Internet becomes more mature, there is a realization that improving the performance of routers has the potential to substantially improve Internet performance in general. Currently, most routers forward packets in a First-In-First-Out (FIFO) order. However, the diversity of applications supported by modern IP-based networks has resulted in unpredictable packet flows and heterogeneous network traffic. Thus, it is becoming more reasonable to consider differentiating between different types of packets, and perhaps to consider allowing packets to specify a deadline by which they must be processed. These trends have made buffer management at routers critical to providing effective quality of service to the various applications that use the network. In this paper, we study an online problem in which each packet is described by its discrete arrival time, non-negative weight and discrete deadline; arriving packets are buffered for delivery and all packets have the same process...
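The online model described above can be made concrete with a small sketch. The greedy policy below is a standard baseline for the bounded-delay model, not necessarily the algorithm analyzed in this paper; the `greedy_schedule` name and tuple representation are illustrative assumptions.

```python
import heapq

def greedy_schedule(packets):
    """Greedy policy for the bounded-delay model: at each integer time
    step, transmit the heaviest pending packet whose deadline has not
    passed. packets: list of (arrival, weight, deadline) tuples,
    sorted by arrival time."""
    pending = []        # max-heap keyed on negated weight
    total_weight = 0
    i = 0
    horizon = max(d for _, _, d in packets) + 1
    for t in range(horizon):
        # admit everything that has arrived by time t
        while i < len(packets) and packets[i][0] <= t:
            _, w, d = packets[i]
            heapq.heappush(pending, (-w, d))
            i += 1
        # transmit the heaviest non-expired packet, discarding expired ones
        while pending:
            neg_w, d = heapq.heappop(pending)
            if d >= t:
                total_weight += -neg_w
                break
    return total_weight

# Two packets contend at t=0: the heavier goes first, and the lighter
# one still meets its deadline at t=1.
print(greedy_schedule([(0, 3, 1), (0, 5, 2)]))   # → 8
```

Greedy is known to be 2-competitive in this model; stronger guarantees require more careful tie-breaking between weight and deadline.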
Computer Communications, 1998
An appropriate service for data traffic in ATM networks requires large buffers in network nodes. However, large buffers without a proper allocation scheme may lead to an unsatisfactory quality of service. Most present allocation schemes either necessitate a complicated queueing system or they do not offer a sufficient fairness. This paper describes a rather simple buffer management scheme that results in fair allocation of bandwidth among competing connections by using only a FIFO buffer. The performance and fairness of the allocation scheme has been analysed by means of a simulation program.
IEICE Transactions on Communications, 2013
OpenFlow, originally proposed for campus and enterprise network experimentation, has become a promising SDN architecture that is now considered a widely deployable production network node. It has consequently been pointed out that OpenFlow cannot scale and replace today's versatile network devices due to its limited scalability and flexibility. In this paper, we propose OpenQFlow, a novel scalable and flexible variant of OpenFlow. OpenQFlow provides fine-grained flow tracking while flow classification is decoupled from the tracking by separating the inefficiently coupled flow table into three different tables: a flow state table, a forwarding rule table, and a QoS rule table. We also develop a two-tier flow-based QoS framework, derived from our new packet scheduling algorithm, which provides performance guarantees and fairness at both granularity levels of micro- and aggregate-flow at the same time. We have implemented OpenQFlow on an off-the-shelf microTCA chassis equipped with a commodity multicore processor, for which our architecture is suited, to achieve high performance with carefully engineered software design and optimization.
ICC 2001. IEEE International Conference on Communications. Conference Record (Cat. No.01CH37240), 2001
This paper presents an architecture and mechanisms to support multiple QoS under the DiffServ paradigm. On the data plane, we present a node architecture based on the virtual time reference system (VTRS), which is a unifying scheduling framework for scalable support of the guaranteed service. The key building block of our node architecture is the core-stateless virtual clock (CSVC) scheduling algorithm, which, in terms of providing delay guarantees, has the same expressive power as a stateful weighted fair queueing (WFQ) scheduler. Based on the CSVC scheduler, we design a node architecture that is capable of supporting integrated transport of the guaranteed service (GS), the premium service (PS), the assured service (AS), and the traditional best-effort (BE) service. On the control plane, we present a BB architecture to provide flexible resource allocation and QoS provisioning. Simulation results demonstrate that our architecture and mechanisms can provide scalable and flexible transport of integrated traffic of the GS, the PS, the AS, and the BE services. Here a flow can be either an individual user flow, or an aggregate traffic flow of multiple user flows, defined in any appropriate fashion.
Telecommunication Systems
The Guaranteed Frame Rate (GFR) service is viewed as the most promising ATM service for carrying aggregate TCP/IP traffic over large distances. In this work, we develop an analytical model to assess the performance of TCP over the Differential Fair Buffer Allocation implementation suggested by the ATM Forum. We consider the problem of a single GFR VC fed by multiple TCP Reno sources. The proposed model distinguishes itself from prior work in two ways: it captures the behavior of aggregate TCP traffic and it explicitly allows for queue analysis. From a modeling point of view, our study shows that the reactive behavior of TCP in congestion avoidance can be approximated by a two-node Markov chain. In terms of performance measures, we quantify the impact of traffic aggregation on queueing performance. Among many other results, our model predicts that, although the mean queue length seems to reach a maximum at a certain aggregation level, the mean loss probability appears to increase linearly with the number of sources.
Computer Networks, 2010
Most existing criteria for sizing router buffers rely on explicit formulation of the relationship between buffer size and characteristics of Internet traffic. However, this is a non-trivial, if not impossible, task given that the number of flows, their individual RTTs, and congestion control methods, as well as flow responsiveness, are unknown. In this paper, we undertake a completely different approach that uses control-theoretic buffer size tuning in response to traffic dynamics. Motivated by the monotonic relationship between buffer size and loss rate and utilization, we design a mechanism called Adaptive Buffer Sizing (ABS), which is composed of two Integral controllers for dynamic buffer adjustment and two gradient-based components for intelligent parameter training. We demonstrate via ns2 simulations that ABS successfully stabilizes the buffer size at its minimum value under given constraints, scales to a wide spectrum of flow populations and link capacities, exhibits fast convergence rate and stable dynamics in various network settings, and is robust to load changes and generic Internet traffic (including FTP, HTTP, and non-TCP flows). All of these demonstrate that ABS is a promising mechanism for tomorrow's router infrastructure and may be of significant interest for the ongoing collaborative research and development efforts (e.g., GENI and FIND) in reinventing the Internet.
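The control loop sketched in this abstract can be illustrated with a minimal Integral-controller step. The gains, targets, and function name below are illustrative assumptions, not ABS's trained parameters.

```python
def abs_update(buffer_size, loss_rate, utilization,
               target_loss=0.01, target_util=0.98,
               k_loss=500.0, k_util=200.0,
               min_size=64, max_size=100_000):
    """One control interval of an Integral-controller step in the
    spirit of ABS: grow the buffer when the loss rate exceeds its
    target, shrink it when utilization exceeds its target.
    Gains and targets here are illustrative, not trained values."""
    delta = k_loss * (loss_rate - target_loss) \
        - k_util * (utilization - target_util)
    return max(min_size, min(max_size, round(buffer_size + delta)))

# Loss above target while utilization sits on target: the buffer grows.
print(abs_update(1000, 0.05, 0.98))   # → 1020
```

In the paper the controller gains are themselves adjusted by gradient-based training; the fixed gains above stand in for that component.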
2016
This paper studies the use of Palermo receiver-side congestion control as an alternative to Active Queue Management (AQM) for end users needing to improve latency and fair sharing in their incoming traffic. Because end users lack administrative access to ISP devices to tune their incoming bottleneck queue, this alternative is a valid option to increase performance.
ACM SIGCOMM Computer Communication Review, 1998
In recent years, a number of link scheduling algorithms have been proposed that greatly improve upon traditional FIFO scheduling in being able to assure rate and delay bounds for individual sessions. However, they cannot be easily deployed in a backbone environment with thousands of sessions, as their complexity increases with the number of sessions. In this paper, we propose and analyze an approach that uses a simple buffer management scheme to provide rate guarantees to individual flows (or to a set of flows) multiplexed into a common FIFO queue. We establish the buffer allocation requirements to achieve these rate guarantees and study the trade-off between the achievable link utilization and the buffer size required with the proposed scheme. The aspect of fair access to excess bandwidth is also addressed, and its mapping onto a buffer allocation rule is investigated. Numerical examples are provided that illustrate the performance of the proposed schemes. Finally, a scalable architecture for QoS provisioning is presented that integrates the proposed buffer management scheme with WFQ scheduling that uses a small number of queues.
Algorithmica, 2005
We consider a network providing Differentiated Services (Diffserv) which allow Internet service providers (ISP) to offer different levels of Quality of Service (QoS) to different traffic streams. We study two types of buffering policies that are used in network switches supporting QoS. In the FIFO type, packets must be transmitted in the order they arrive. In the bounded-delay type, each packet has a maximum delay time by which it must be transmitted, or otherwise it is lost. In both models, the buffer space is limited, and packets are lost if the buffer is full. Each packet has an intrinsic value, and the goal is to maximize the total value of transmitted packets. Our main contribution is an algorithm for the FIFO model for arbitrary packet values that for the first time achieves a competitive ratio better than 2, namely 2 − ε for a constant ε > 0. We also describe an algorithm for the bounded delay model that simulates our algorithm for the FIFO model, and show that it achieves the same competitive ratio.
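For context, the classic baseline in the FIFO model is the greedy preemptive policy, which is 2-competitive; the algorithm in this paper improves on it and is more involved. A hedged sketch of the greedy admission step, with illustrative names:

```python
from collections import deque

def fifo_greedy_admit(buffer, capacity, value):
    """Admission step of the classic greedy preemptive FIFO policy
    (2-competitive; the paper's algorithm improves on it). buffer
    holds packet values in arrival order and is always transmitted
    head-first, so only admission decisions differ between policies."""
    if len(buffer) < capacity:
        buffer.append(value)
        return True
    victim = min(range(len(buffer)), key=lambda i: buffer[i])
    if buffer[victim] < value:      # preempt the cheapest stored packet
        del buffer[victim]
        buffer.append(value)
        return True
    return False                    # reject the new arrival

b = deque([1, 5])
fifo_greedy_admit(b, 2, 7)          # buffer full: value 1 is preempted
print(list(b))                      # → [5, 7]
```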
Journal of Network and Computer Applications, 2019
OpenFlow supports internal buffering of data packets in a Software-Defined Networking (SDN) switch, whereby a fraction of the data packet header is sent to the controller instead of the entire data packet. This internal buffering increases the robustness and the utilization of the link between SDN switches and the controller by absorbing temporary bursts of packets which may overwhelm the controller. Existing queueing models for an SDN have focused on switches that immediately send packets to the controller for a decision, with no existing models investigating the impact of the internal buffer in SDN software and hardware switches. In this paper, we propose a unified queueing model to characterise the performance of SDN software and hardware switches with the internal buffer. This unified queueing model is an analytical tool for network engineers to predict delay and loss during SDN deployments in delay- and loss-sensitive environments. Our results show that a hardware switch achieves up to 80% lower average packet transfer delay and 99% lower packet loss rate at the cost of requiring up to 50% more queue capacity than a software switch. The proposed models are validated with a discrete event simulation, with observed errors between 0.6% and 2.8% for both average packet transfer delay and average packet loss rate. Moreover, a hardware switch outperforms a software switch with an increasing number of hosts per switch, suggesting that a hardware switch has better scalability. We use the insights from the model to develop guidelines that help network engineers decide between a software and a hardware switch in their SDN deployments.
Proceedings. IEEE INFOCOM '98, the Conference on Computer Communications. Seventeenth Annual Joint Conference of the IEEE Computer and Communications Societies. Gateway to the 21st Century (Cat. No.98CH36169)
Seamless Interconnection for Universal Services. Global Telecommunications Conference. GLOBECOM'99. (Cat. No.99CH37042)
An objective of the next-generation network is the accommodation of services with different QoS requirements. This indicates that the network should provide special mechanisms in order to prioritize access to network node resources, such as link capacity and buffer space. We studied the performance of sharing buffer space and link capacity between sessions, the traffic of which is modeled by independent general Markov-Modulated Fluid Process (MMFP) sources. For scheduling we use the Generalized Processor Sharing (GPS) policy, and improve previous upper bounds on the queue occupancy distributions. As an example of combining GPS with buffer management we apply our results to complete buffer sharing with virtual partitioning (VP+GPS). We also derive results on the resource allocation trade-off, with applications in traffic management and admission control. This work was supported in part by the NY State Center for Advanced Technologies in Telecommunications (CATT) and Sumitomo Electric Industries (USA). The work was done while G. Lapiotis was a research fellow in the NY State CATT, Polytechnic University.
IEEE Journal on Selected Areas in Communications, 1999
Recently, there has been much interest in using active queue management in routers in order to protect users from connections that are not very responsive to congestion notification. A recent Internet draft recommends schemes based on random early detection for achieving these goals, to the extent that it is possible, in a system without "per-flow" state. However, a "stateless" system with first-in/first-out (FIFO) queueing is very much handicapped in the degree to which flow isolation and fairness can be achieved. Starting with the observation that a "stateless" system is but one extreme in a spectrum of design choices and that per-flow queueing for a large number of flows is possible, we present active queue management mechanisms that are tailored to provide a high degree of isolation and fairness for TCP connections in a gigabit IP router using per-flow queueing. We show that IP flow state in a router can be bounded if the scheduling discipline used has finite memory, and we investigate the performance implications of different buffer management strategies in such a system. We show that merely using per-flow scheduling is not sufficient to achieve effective isolation and fairness, and it must be combined with appropriate buffer management strategies.
This paper presents Merlin, a new framework for managing resources in software-defined networks. With Merlin, administrators express high-level policies using programs in a declarative language. The language includes logical predicates to identify sets of packets, regular expressions to encode forwarding paths, and arithmetic formulas to specify bandwidth constraints. The Merlin compiler uses a combination of advanced techniques to translate these policies into code that can be executed on network elements including a constraint solver that allocates bandwidth using parameterizable heuristics. To facilitate dynamic adaptation, Merlin provides mechanisms for delegating control of sub-policies and for verifying that modifications made to sub-policies do not violate global constraints. Experiments demonstrate the expressiveness and scalability of Merlin on real-world topologies and applications. Overall, Merlin simplifies network administration by providing high-level abstractions for specifying network policies and scalable infrastructure for enforcing them.
We propose a buffer management mechanism, called V-WFQ (Virtual Weighted Fair Queueing), for achieving an approximately fair allocation of bandwidth with a small amount of hardware in a high-speed network. The basic process for the allocation of bandwidth uses selective packet dropping that compares the measured input rate of the flow with an estimated fair share of bandwidth. Although V-WFQ is a hardware-efficient FIFO-based algorithm, it achieves almost ideal fairness in bandwidth allocation. V-WFQ can be implemented in the high-speed core routers of today's IP backbone networks to provide various high-quality services. We have investigated V-WFQ's performance in terms of fairness and link utilization through extensive simulation. The results of simulation show that V-WFQ achieves a good balance between fairness and link utilization under various simulation conditions.
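The selective-dropping idea described above can be sketched in a few lines. This is an illustrative simplification; V-WFQ's actual rate estimator and fair-share computation are more elaborate, and the function name is an assumption.

```python
import random

def vwfq_admit(flow_rate, fair_share, rng=random.random):
    """Selective-drop test in the spirit of V-WFQ: a flow sending
    above the estimated fair share has its packets dropped with
    probability equal to the flow's relative excess, so its accepted
    rate converges toward the fair share."""
    if flow_rate <= fair_share:
        return True                 # conforming flow: always admit
    p_drop = (flow_rate - fair_share) / flow_rate
    return rng() >= p_drop

# A flow at twice its fair share sees a 50% drop probability.
print(vwfq_admit(10.0, 5.0, rng=lambda: 0.6))   # → True
print(vwfq_admit(10.0, 5.0, rng=lambda: 0.4))   # → False
```

Because accepted traffic from a non-conforming flow averages out to the fair share, a plain FIFO queue behind this test approximates per-flow fairness without per-flow queues.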