1995, IEEE Journal on Selected Areas in Communications
We address the problem of designing optimal buffer management policies in shared memory switches when packets already accepted in the switch can be dropped (pushed out). Our goal is to maximize the overall throughput, or equivalently to minimize the overall loss probability in the system. For a system with two output ports, we prove that the optimal policy is of push-out with threshold type (POT). The same result holds if the optimality criterion is the weighted sum of the port loss probabilities. For this system, we also give an approximate method for the calculation of the optimal threshold, which we conjecture to be asymptotically correct. For the N-ported system, the optimal policy is not known in general, but we show that for a symmetric system (equal traffic on all ports) it consists of always accepting arrivals when the buffer is not full, and dropping one from the longest queue to accommodate the new arrival when the buffer is full. Numerical results are provided which reveal an interesting and somewhat unexpected phenomenon. While the overall improvement in loss probability of the optimal POT policy over the optimal coordinate-convex policy is not very significant, the loss probability of an individual output port remains approximately constant as the load on the other port varies when the optimal POT policy is applied, a property not shared by the optimal coordinate-convex policy.
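The symmetric N-port policy described in this abstract (admit every arrival while the buffer has space; when full, push out from the longest queue) can be sketched as follows. This is an illustrative sketch only; the class name `SharedBuffer` and method names are assumptions, not from the paper.

```python
# Sketch of the pushout rule for the symmetric N-port shared-memory
# switch: always accept an arrival while the buffer has free space;
# when full, drop a packet from the longest queue to make room.

class SharedBuffer:
    def __init__(self, capacity, num_ports):
        self.capacity = capacity
        self.queues = [[] for _ in range(num_ports)]

    def total(self):
        return sum(len(q) for q in self.queues)

    def enqueue(self, port, packet):
        if self.total() >= self.capacity:
            # Buffer full: push out from the longest queue
            # (which may be the arrival's own queue).
            longest = max(self.queues, key=len)
            longest.pop()
        self.queues[port].append(packet)
```

Note that the arrival is always admitted; only the victim of the pushout depends on the queue lengths.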
IEICE Transactions on Information and Systems, 2008
The online buffer management problem formulates the problem of queueing policies of network switches supporting QoS (Quality of Service) guarantees. For this problem, several models are considered. In this paper, we focus on shared memory switches with preemption. We prove that the competitive ratio of the Longest Queue Drop (LQD) policy is (4M − 4)/(3M − 2) in the case of N = 2, where N is the number of output ports in a switch and M is the size of the buffer. This matches the lower bound given by Hahne, Kesselman and Mansour.
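The competitive ratio stated in this abstract, (4M − 4)/(3M − 2), can be evaluated directly; it increases with the buffer size M and tends to 4/3 in the limit. A minimal sketch (the function name is illustrative):

```python
def lqd_competitive_ratio(M):
    """Competitive ratio of the LQD policy for N = 2 output ports
    and buffer size M, as stated in the abstract."""
    return (4 * M - 4) / (3 * M - 2)

# For large buffers the ratio approaches 4/3.
```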
IEEE/ACM Transactions on Networking, 1998
In shared-memory packet switches, buffer management schemes can improve overall loss performance, as well as fairness, by regulating the sharing of memory among the different output port queues. Of the conventional schemes, static threshold (ST) is simple but does not adapt to changing traffic conditions, while pushout (PO) is highly adaptive but difficult to implement. We propose a novel scheme called dynamic threshold (DT) that combines the simplicity of ST and the adaptivity of PO. The key idea is that the maximum permissible length, for any individual queue at any instant of time, is proportional to the unused buffering in the switch. A queue whose length equals or exceeds the current threshold value may accept no more arrivals. An analysis of the DT algorithm shows that a small amount of buffer space is (intentionally) left unallocated, and that the remaining buffer space becomes equally distributed among the active output queues. We use computer simulation to compare the loss performance of DT, ST, and PO. DT control is shown to be more robust to uncertainties and changes in traffic conditions than ST control.
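The DT rule described in this abstract (a queue accepts an arrival only while its length is below a threshold proportional to the unused buffer space) can be sketched as a one-line admission test. This is a hedged illustration: the function name and the proportionality constant `alpha` (set to 1 here) are assumptions for the example, not values from the paper.

```python
def dt_accepts(queue_len, total_occupancy, buffer_size, alpha=1.0):
    """Dynamic Threshold admission test: the current threshold is
    proportional to the unused buffering in the switch, and a queue
    whose length equals or exceeds it accepts no more arrivals."""
    threshold = alpha * (buffer_size - total_occupancy)
    return queue_len < threshold
```

Because the threshold shrinks as occupancy grows, some buffer space is intentionally left unallocated, matching the analysis in the abstract.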
HPSR 2005: Workshop on High Performance Switching and Routing, 2005
In this paper we study the efficient design of buffer management policies for QoS-efficient shared-memory-based fast packet switches. There are two aspects that are of interest. First is the packet size: whether all packets have the same or different sizes. The second aspect is the value or space priority of the packets: do all packets have the same space priority, or do different packets have different space priorities. We present two types of policies to achieve QoS goals for packets with different priorities: the push-out scheme and the expelling scheme. For this paper we consider the case of packets of variable length with two space priorities, and our goal is to minimize the total weighted packet loss. Simulation studies show that expelling policies can outperform the push-out policies when it comes to offering variable QoS for packets of two different priorities, and expelling policies also help improve the amount of admissible load. Some other comparisons of push-out and expelling policies are also presented using simulations.
IEEE/ACM Transactions on Networking, 1999
Shared-buffer ATM switches can have severe cell loss under asymmetrical or heavy loading conditions, which makes buffer management essential. In this paper, we study the shared buffer system under the class of all work-conserving pushout policies and derive the properties of the optimal policy, which gives the least average expected total cell loss probability. In a 2 × 2 system with independent identically distributed Bernoulli arrivals, we show that the optimal policy can be characterized by a single threshold. In the case of correlated arrivals, modeled by a Discrete Batch Markovian Arrival Process, the optimal policy has multiple thresholds, one for each phase of the arrival process. For the N × N shared buffer ATM switch, we are unable to prove optimality of any policy, but study the system via simulations. We provide a dynamic buffer management policy and compare its performance with that of static threshold-type policies.
IEEE Communications Letters, 2002
In this letter we study the problem of the optimal design of buffer management policies within the class of pushout and expelling policies for a shared memory asynchronous transfer mode (ATM) switch or demultiplexer fed by traffic containing two different space priorities. A numerical study of the optimal policies for small buffer sizes is used to help design heuristics applicable to large buffer sizes. Simulation studies for large buffer systems are then presented.
International Journal of …, 2007
IET Communications, 2012
This paper presents a theoretical throughput analysis of two buffered-crossbar switches, called shared-memory crosspoint buffered (SMCB) switches, in which crosspoint buffers are shared by two or more inputs. In one of the switches, the shared-crosspoint buffers are dynamically partitioned and assigned to the sharing inputs, and memory is sped up. In the other switch, inputs are arbitrated to determine which of them accesses the shared-crosspoint buffers, and memory speedup is avoided. SMCB switches have been shown to achieve a throughput comparable to that of a combined input-crosspoint buffered (CICB) switch with crosspoint buffers dedicated to each input, but with less memory than a CICB switch. The two analyzed SMCB switches use random selection as the arbitration scheme. We model the states of the shared crosspoint buffers of the two switches using a Markov-modulated process and prove that the throughput of the proposed switches approaches 100% under independent and identically distributed uniform traffic. In addition, we provide numerical evaluations of the derived formulas to show how the throughput asymptotically approaches 100%.
First International Conference on Broadband Networks, 2004
We consider the maximization of network throughput in buffer-constrained optical networks using aggregate bandwidth allocation and reservation-based transmission control. Assuming that all flows are subject to loss-based TCP congestion control, we quantify the effects of buffer capacity constraints on bandwidth utilization efficiency through contention-induced packet loss. The analysis shows that the ability of TCP flows to efficiently utilize successful reservations is highly sensitive to the available buffer capacity. Maximizing the bandwidth utilization efficiency under buffer capacity constraints thus requires decoupling packet loss from contention-induced blocking of transmission requests. We describe a confirmed (two-way) reservation scheme that eliminates contention-induced loss, so that no packets are dropped at the network's core, and loss can be incurred only at the adequately buffer-provisioned ingress routers, where it is exclusively congestion-induced. For the confirmed signaling scheme, analytical and simulation results indicate that TCP aggregates are able to efficiently utilize the successful reservations independently of buffer constraints.
Structural Information and Communication Complexity, 2015
Cloud applications bring new challenges to the design of network elements, in particular accommodating for the burstiness of traffic workloads. Shared memory switches represent the best candidate architecture to exploit buffer capacity; we analyze the performance of this architecture. Our goal is to explore the impact of additional traffic characteristics such as varying processing requirements and packet values on objective functions. The outcome of this work is a better understanding of the relevant parameters for buffer management to achieve better performance in dynamic environments of data centers. We consider a model that captures more of the properties of the target architecture than previous work and consider several scheduling and buffer management algorithms that are specifically designed to optimize its performance. In particular, we provide analytic guarantees for the throughput performance of our algorithms that are independent from specific distributions of packet arrivals. We furthermore report on a comprehensive simulation study which validates our analytic results.
Seamless Interconnection for Universal Services. Global Telecommunications Conference. GLOBECOM'99. (Cat. No.99CH37042)
An objective of the next-generation network is the accommodation of services with different QoS requirements. This indicates that the network should provide special mechanisms in order to prioritize the access to network node resources, such as link capacity and buffer space. We studied the performance of sharing buffer space and link capacity between sessions, the traffic of which is modeled by independent general Markov-Modulated Fluid Process (MMFP) sources. For scheduling we use the Generalized Processor Sharing (GPS) policy, and improve previous upper bounds on the queue occupancy distributions. As an example of combining GPS with buffer management we apply our results to complete buffer sharing with virtual partitioning (VP+GPS). We also derive results on the resource allocation trade-off, with applications in traffic management and admission control. This work was supported in part by the NY State Center for Advanced Technologies in Telecommunications (CATT), and Sumitomo Electric Industries (USA). The work was done when G. Lapiotis was a research fellow in the NY State CATT, Polytechnic University.
Journal of the ACM, 1995
2006 14th IEEE International Workshop on Quality of Service, 2006
Computer Communications, 1998
Telecommunication Systems
IEEE Transactions on Parallel and Distributed Systems, 2007
Proc. Australian Telecommunications Networking and Applications Conference, ATNAC
Theoretical Computer Science, 2013
Computers & Operations Research, 2008
ACM SIGCOMM Computer Communication Review, 2004
… Conference, 2006. GLOBECOM'06. …, 2006
Journal of Industrial and Management Optimization, 2011
[Conference Record] GLOBECOM '92 - Communications for Global Users: IEEE, 1992
IEEE Globecom 2006, 2006
Computer Networks and ISDN Systems, 1993