2007, International Journal of …
Dynamic Buffer Management Using Per-Queue Thresholds addresses congestion in packet switching network nodes by introducing adaptive buffer management techniques. The proposed approach utilizes per-queue thresholds to optimize the handling of incoming packets, particularly during periods of uneven traffic distribution. Performance simulations demonstrate that the threshold-based method significantly enhances buffer utilization and reduces packet loss compared to traditional FIFO and complete sharing strategies.
IEEE/ACM Transactions on Networking, 1998
In shared-memory packet switches, buffer management schemes can improve overall loss performance, as well as fairness, by regulating the sharing of memory among the different output port queues. Of the conventional schemes, static threshold (ST) is simple but does not adapt to changing traffic conditions, while pushout (PO) is highly adaptive but difficult to implement. We propose a novel scheme called dynamic threshold (DT) that combines the simplicity of ST and the adaptivity of PO. The key idea is that the maximum permissible length, for any individual queue at any instant of time, is proportional to the unused buffering in the switch. A queue whose length equals or exceeds the current threshold value may accept no more arrivals. An analysis of the DT algorithm shows that a small amount of buffer space is (intentionally) left unallocated, and that the remaining buffer space becomes equally distributed among the active output queues. We use computer simulation to compare the loss performance of DT, ST, and PO. DT control is shown to be more robust to uncertainties and changes in traffic conditions than ST control.
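The DT rule is compact enough to write down directly. Below is a minimal sketch of the admission test described in the abstract, assuming a proportionality constant alpha and cell-granularity queues; the class and parameter names are illustrative, not taken from the paper.

```python
class DTBuffer:
    """Minimal sketch of the dynamic threshold (DT) admission rule.
    Names and the alpha parameter are illustrative assumptions."""

    def __init__(self, total_buffer, num_queues, alpha=1.0):
        self.B = total_buffer           # total shared buffer, in cells
        self.alpha = alpha              # threshold = alpha * unused space
        self.queues = [0] * num_queues  # current length of each output queue

    def arrival(self, port):
        """Accept a cell for `port` iff its queue is below the current threshold."""
        used = sum(self.queues)
        threshold = self.alpha * (self.B - used)
        if used < self.B and self.queues[port] < threshold:
            self.queues[port] += 1
            return True
        return False                    # cell dropped

    def departure(self, port):
        """One cell transmitted from `port`'s queue."""
        if self.queues[port] > 0:
            self.queues[port] -= 1
```

With alpha = 1 and a single persistently overloaded queue, admission stops where q = B − q, so the queue settles near B/2 and the other half of the buffer stays unallocated, ready for queues that become active later.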
Computer Networks, 2001
The basic idea behind active queue management schemes such as random early detection (RED) is to detect incipient congestion early and to convey congestion notification to the end-systems, allowing them to reduce their transmission rates before queues in the network overflow and packets are dropped. The basic RED scheme (and its newer variants) maintains an average of the queue length which it uses together with a number of queue thresholds to detect congestion. RED schemes drop incoming packets in a random probabilistic manner where the probability is a function of recent buffer fill history. The objective is to provide a more equitable distribution of packet loss, avoid the synchronization of flows, and at the same time improve the utilization of the network. The setting of the queue thresholds in RED schemes is problematic because the required buffer size for good sharing among TCP connections is dependent on the number of TCP connections using the buffer. This paper describes a technique for enhancing the effectiveness of RED schemes by dynamically changing the threshold settings as the number of connections (and system load) changes. Using this technique, routers and switches can effectively control packet losses and TCP timeouts while maintaining high link utilization.
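A minimal sketch of this idea follows, assuming the RED thresholds are scaled linearly with the estimated number of active connections and capped by fixed fractions of the physical buffer; the scaling constants and the update rule are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def adaptive_red_drop(avg_qlen, n_flows, buffer_size,
                      base_min=0.25, base_max=0.75, max_p=0.1):
    """RED drop decision with thresholds scaled by connection count.
    The constants (3 and 6 packets per flow, the caps, max_p) are
    assumptions for illustration only."""
    n_flows = max(1, n_flows)
    # Give each connection room for a few packets, within the buffer.
    min_th = min(base_min * buffer_size, 3 * n_flows)
    max_th = min(base_max * buffer_size, 6 * n_flows)
    if avg_qlen < min_th:
        return False                    # below min threshold: never drop
    if avg_qlen >= max_th:
        return True                     # above max threshold: always drop
    # In between: drop with probability rising linearly toward max_p.
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p
```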
We propose a buffer management mechanism, called V-WFQ (Virtual Weighted Fair Queueing), for achieving an approximately fair allocation of bandwidth with a small amount of hardware in a high-speed network. The basic process for the allocation of bandwidth uses selective packet dropping that compares the measured input rate of the flow with an estimated fair share of bandwidth. Although V-WFQ is a hardware-efficient FIFO-based algorithm, it achieves almost ideal fairness in bandwidth allocation. V-WFQ can be implemented in the high-speed core routers of today's IP backbone networks to provide various high-quality services. We have investigated V-WFQ's performance in terms of fairness and link utilization through extensive simulation. The results of simulation show that V-WFQ achieves a good balance between fairness and link utilization under various simulation conditions.
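The selective-dropping step can be sketched as follows, assuming a flow exceeding its fair share keeps packets with probability fair_share/rate, which polices its accepted rate toward the fair share; V-WFQ's own rate measurement and fair-share estimation are not modeled here, and the function name is hypothetical.

```python
import random

def vwfq_admit(measured_rate, fair_share):
    """Selective-drop sketch: flows at or below the fair share are
    always accepted; an overrunning flow keeps each packet with
    probability fair_share / measured_rate, trimming its accepted
    rate to roughly the fair share."""
    if measured_rate <= fair_share:
        return True
    return random.random() < fair_share / measured_rate
```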
International Journal of Innovative Research in Computer and Communication Engineering, 2015
In this paper, a packet switching architecture with output queuing is used. The switch is internally non-blocking, but output blocking occurs when packets are destined to the same output (even though there are output queues); the switch fabric has a capacity of N × N². An exact model of the switch has been developed which can be used to determine the blocking performance of the switch and to obtain both its throughput and packet loss characteristics. In this architecture, each line card is connected by a dedicated point-to-point link to the central switch fabric. Two structures can be classified: centralized and distributed. Buffer arrangements are also categorized into output-queued and combined input-output-queued switches, and the hardware complexity of OQ and VOQ is also discussed.
10th IEEE Symposium on Computers and Communications (ISCC'05)
A packet buffer for a protocol processor is a large shared memory space that holds incoming data packets in a computer network. This paper investigates four packet buffer management algorithms for a protocol processor, including the Dynamic Algorithm with Different Thresholds (DADT), which is proposed to reduce the packet loss ratio efficiently. The proposed algorithm takes advantage of the different packet sizes of each application by allocating buffer space for each queue proportionally. According to our simulation results, the DADT algorithm works well in reducing the packet loss ratio compared to the other three algorithms.
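A sketch of the proportional allocation is below, under the assumption that "proportionally" means each queue's threshold scales with its application's mean packet size; the function and the example applications are hypothetical.

```python
def dadt_thresholds(total_buffer, mean_pkt_size):
    """Split the shared buffer among queues in proportion to each
    application's mean packet size (one assumed reading of the
    abstract's 'allocating buffer space for each queue proportionally')."""
    total = sum(mean_pkt_size.values())
    return {q: total_buffer * s / total for q, s in mean_pkt_size.items()}

# Example: a 64 KB buffer shared by three hypothetical applications.
print(dadt_thresholds(64_000, {"voip": 200, "video": 1300, "bulk": 1500}))
```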
IEEE Global Telecommunications Conference, 2004. GLOBECOM '04., 2004
In this paper the new packet switch architecture with multiple output queuing (MOQ) is proposed. In this architecture the nonblocking switch fabric, which has the capacity of N × N², and output buffers arranged into N separate queues for each output, are applied. Each of N queues in one output port stores packets directed to this output only from one input. Both switch fabric and buffers can operate at the same speed as input and output ports. This solution does not need any speedup in the switch fabric, nor arbitration logic for deciding which packets from inputs will be transferred to outputs. Two possible switch fabric structures are considered: the centralized structure with the switch fabric located on one or several separate boards, and the distributed structure with the switch fabric distributed over line cards. Buffer arrangements as separate queues with independent write pointers or as a memory bank with one pointer are also discussed. The mean cell delay and cell loss probability as performance measures for the proposed switch architecture are evaluated and compared with the performance of the OQ and VOQ architectures. The hardware complexity of OQ, VOQ and the presented MOQ are also compared. We conclude that the hardware complexity of the proposed switch is very similar to the VOQ switch but its performance is comparable to the OQ switch.
IEEE Globecom 2006, 2006
Virtual Output Queuing (VOQ) is widely used by high-speed packet switches to overcome head-of-line blocking. This is done by means of matching algorithms. In fixed-length VOQ switches, variable-length IP packets are segmented into fixed-length cells at the inputs. When a cell is transferred to its destination output, it stays in the reassembly buffer and waits for the other cells of the same packet, so that the original packet can be reassembled before transmission.
2007 16th International Conference on Computer Communications and Networks, 2007
Packet buffers in a smart network interface card are managed in a way that reduces packet losses from high-speed bursts of incoming data. Currently, two types of packet buffer management techniques are used: static buffer management and dynamic buffer management. Dynamic buffer management techniques are more efficient than static ones because they change the threshold value according to network traffic conditions. However, current dynamic techniques cannot adjust the threshold instantaneously, so packet losses with dynamic techniques remain high. We therefore propose a history-based buffer management scheme to address the issue. Our experimental results show that the history-based scheme reduces packet loss by 11% to 15.9% as compared to other conventional dynamic algorithms. Keywords: packet buffer; network interface card; layer 3 and 4 protocols; VHDL; static and dynamic buffer management.
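The abstract does not detail the history mechanism, but one plausible reading is sketched below: predict each queue's near-term demand from an exponentially weighted moving average (EWMA) of its recent arrivals and size the thresholds proportionally. The EWMA predictor and all names are assumptions for illustration; the paper's actual mechanism may differ.

```python
class HistoryBuffer:
    """Sketch of a history-based threshold: each queue's threshold is
    proportional to its predicted demand, where demand is an EWMA of
    arrivals observed in recent intervals."""

    def __init__(self, total_buffer, num_queues, weight=0.8):
        self.B = total_buffer
        self.weight = weight                # how strongly history persists
        self.demand = [1.0] * num_queues    # EWMA of arrivals per interval

    def observe(self, arrivals):
        """Fold the arrivals counted in the last interval into the history."""
        for i, a in enumerate(arrivals):
            self.demand[i] = self.weight * self.demand[i] + (1 - self.weight) * a

    def thresholds(self):
        """Per-queue thresholds proportional to predicted demand."""
        total = sum(self.demand)
        return [self.B * d / total for d in self.demand]
```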
HPSR. 2005 Workshop on High Performance Switching and Routing, 2005., 2005
In this paper we study the efficient design of buffer management policies for QoS-efficient shared-memory-based fast packet switches. There are two aspects of interest. The first is packet size: whether all packets have the same size or different sizes. The second aspect is the value or space priority of the packets: whether all packets have the same space priority or different packets have different space priorities. We present two types of policies to achieve QoS goals for packets with different priorities: the push-out scheme and the expelling scheme. For this paper we consider the case of packets of variable length with two space priorities, and our goal is to minimize the total weighted packet loss. Simulation studies show that expelling policies can outperform the push-out policies when it comes to offering variable QoS for packets of two different priorities, and expelling policies also help improve the amount of admissible load. Some other comparisons of push-out and expelling policies are also presented using simulations.
Proceedings. IEEE INFOCOM '98, the Conference on Computer Communications. Seventeenth Annual Joint Conference of the IEEE Computer and Communications Societies. Gateway to the 21st Century (Cat. No.98CH36169)
IEEE Journal on Selected Areas in Communications, 1999
Recently, there has been much interest in using active queue management in routers in order to protect users from connections that are not very responsive to congestion notification. A recent Internet draft recommends schemes based on random early detection for achieving these goals, to the extent that it is possible, in a system without "per-flow" state. However, a "stateless" system with first-in/first-out (FIFO) queueing is very much handicapped in the degree to which flow isolation and fairness can be achieved. Starting with the observation that a "stateless" system is but one extreme in a spectrum of design choices and that per-flow queueing for a large number of flows is possible, we present active queue management mechanisms that are tailored to provide a high degree of isolation and fairness for TCP connections in a gigabit IP router using per-flow queueing. We show that IP flow state in a router can be bounded if the scheduling discipline used has finite memory, and we investigate the performance implications of different buffer management strategies in such a system. We show that merely using per-flow scheduling is not sufficient to achieve effective isolation and fairness, and it must be combined with appropriate buffer management strategies.
Seamless Interconnection for Universal Services. Global Telecommunications Conference. GLOBECOM'99. (Cat. No.99CH37042)
An objective of the next-generation network is the accommodation of services with different QoS requirements. This indicates that the network should provide special mechanisms in order to prioritize the access to network node resources, such as link capacity and buffer space. We studied the performance of sharing buffer space and link capacity between sessions, the traffic of which is modeled by independent general Markov-Modulated Fluid Process (MMFP) sources. For scheduling we use the Generalized Processor Sharing (GPS) policy, and improve previous upper bounds on the queue occupancy distributions. As an example of combining GPS with buffer management we apply our results to complete buffer sharing with virtual partitioning (VP+GPS). We also derive results on the resource allocation trade-off, with applications in traffic management and admission control. This work was supported in part by the NY State Center for Advanced Technologies in Telecommunications (CATT) and Sumitomo Electric Industries (USA). The work was done when G. Lapiotis was a research fellow in the NY State CATT, Polytechnic University.
IJCSNS, 2006
This paper investigates the problem of enabling Quality of Service (QoS) of multimedia traffic at the input port of high-performance input-queued packet switches using a simulation-based evaluation. We focus on the possibility of assuring QoS of multimedia traffic in such switches by implementing traffic prioritization at the input port where each input-queue has been modified to provide a separate buffer for each of the service classes. The multimedia traffic can be categorized into three classes based on its real-time properties and loss tolerance, and assigned a separate queue for each class. We select appropriate models for each of three types of traffic: video, voice, and data. Then, we propose an efficient dynamic scheduling strategy by implementing multimedia traffic prioritization at the input port of input-queued packet switches. Simulation-based comparisons show that while the static priority scheme is beneficial for highest priority class at the expense of the others, the dynamic prioritization serves fairly well all the classes in terms of delay and loss requirements.
Computer Communications, 2005
This paper addresses scheduling and memory management in input-queued switches having finite buffers, with the objective of improving performance in terms of throughput and average delay. Most of the prior work on scheduling in input-queued switches assumes infinite buffer space. In practice, buffer space being a finite resource, a special memory management scheme becomes essential. The maximum weighted matching (MWM) algorithm, which is known to be optimal in terms of throughput in the infinite-buffer case, turns out to be suboptimal in the presence of memory limitations. We introduce a buffer management scheme called iSMM (Integrated Scheduling and Memory Management) that can be employed jointly with any deterministic iterative scheduling algorithm. We applied iSMM over iSLIP, a popular scheduling algorithm, and studied its effect under various input traffic conditions. Simulation results indicate that iSMM performs better than iSLIP and MWM both in terms of throughput and delay, especially under non-uniform traffic.
IEEE Journal on Selected Areas in Communications, 1995
We address the problem of designing optimal buffer management policies in shared memory switches when packets already accepted in the switch can be dropped (pushed-out). Our goal is to maximize the overall throughput, or equivalently to minimize the overall loss probability in the system. For a system with two output ports, we prove that the optimal policy is of push-out with threshold type (POT). The same result holds if the optimality criterion is the weighted sum of the port loss probabilities. For this system, we also give an approximate method for the calculation of the optimal threshold, which we conjecture to be asymptotically correct. For the N-ported system, the optimal policy is not known in general, but we show that for a symmetric system (equal traffic on all ports) it consists of always accepting arrivals when the buffer is not full, and dropping one from the longest queue to accommodate the new arrival when the buffer is full. Numerical results are provided which reveal an interesting and somewhat unexpected phenomenon. While the overall improvement in loss probability of the optimal POT policy over the optimal coordinate-convex policy is not very significant, the loss probability of an individual output port remains approximately constant as the load on the other port varies and the optimal POT policy is applied, a property not shared by the optimal coordinate-convex policy.
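For the symmetric N-port case, the optimal policy stated above is simple enough to write down directly; a minimal sketch follows, assuming cell granularity and a shared buffer of `capacity` cells (the function name is hypothetical).

```python
def pushout_arrival(queues, capacity, port):
    """Symmetric optimal policy from the abstract: always accept when
    the shared buffer has room; when full, push out a packet from the
    longest queue to accommodate the new arrival."""
    if sum(queues) < capacity:
        queues[port] += 1
        return None                       # accepted, nothing dropped
    longest = max(range(len(queues)), key=lambda i: queues[i])
    queues[longest] -= 1                  # push out from the longest queue
    queues[port] += 1
    return longest                        # port that lost a packet
```

Note that if the arriving packet's own queue is the longest, the push-out and the acceptance cancel out, which matches the policy as stated.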
Proceedings Third IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC 2000) (Cat. No. PR00607), 2000
There have been many debates about the feasibility of providing guaranteed Quality of Service (QoS) when network traffic travels beyond the enterprise domain and into the vast unknown of the Internet. Many mechanisms have been proposed to bring QoS to TCP/IP and the Internet (RSVP, DiffServ, 802.1p). However, until these techniques and the equipment to support them become ubiquitous, most enterprises will rely on local prioritization of the traffic to obtain the best performance for mission critical and time sensitive applications. This work explores prioritizing critical TCP/IP traffic using a multi-queue buffer management strategy that becomes biased against random low priority flows and remains biased while congestion exists in the network. This biasing implies a degree of unfairness but proves to be more advantageous to the overall throughput of the network than strategies that attempt to be fair. Only two classes of services are considered where TCP connections are assigned to these classes and mapped to two underlying queues with round robin scheduling and shared memory. In addition to improving the throughput, cell losses are minimized for the class of service (queue) with the higher priority.
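The biasing idea lends itself to a short sketch: two service classes share memory, and while congestion persists, arrivals from one low-priority flow are refused. The congestion test, the victim-selection rule (the first low-priority flow seen after congestion onset, effectively a random victim), and all names below are illustrative assumptions, not the paper's exact mechanism.

```python
class BiasedSharedBuffer:
    """Two priority classes share one memory pool; during congestion
    the buffer becomes biased against a single low-priority flow and
    stays biased until congestion clears."""

    def __init__(self, capacity, congestion_level=0.9):
        self.capacity = capacity
        self.congestion_level = congestion_level
        self.occupancy = 0
        self.victim = None                 # low-priority flow biased against

    def admit(self, flow_id, high_priority):
        congested = self.occupancy >= self.congestion_level * self.capacity
        if not congested:
            self.victim = None             # bias lifts once congestion clears
        elif not high_priority and self.victim is None:
            self.victim = flow_id          # become biased against this flow
        if self.occupancy >= self.capacity:
            return False                   # buffer full
        if not high_priority and flow_id == self.victim:
            return False                   # remain biased during congestion
        self.occupancy += 1
        return True

    def depart(self):
        if self.occupancy > 0:
            self.occupancy -= 1
```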
Computer Networks, 1999
Efficient and fair use of buffer space in an Asynchronous Transfer Mode (ATM) switch is essential to gain high throughput and low cell loss performance from the network. In this paper a shared buffer architecture associated with threshold-based virtual partition among output ports is proposed. Thresholds are updated based on traffic characteristics on each outgoing link, so as to adapt to traffic loads. The system behavior under varying traffic patterns is investigated via simulation; cell loss rate is the quality of service (QoS) measure used in this study. Our study shows that the threshold based dynamic buffer allocation scheme ensures a fair share of the buffer space even under bursty loading conditions.
2016
In this work, we study the problem of buffer management in network switches from an algorithmic perspective. In a typical switching scenario, packets with different service demands arrive at the input ports of the switch and are stored in buffers (queues) of limited capacity. Thereafter, they are transferred over the switching fabric to their corresponding output ports where they join other queues. Finally, packets are transmitted out of the switch through its outgoing links to their next destinations in the network. Due to limitations in the link bandwidth and buffer capacities, buffers may experience events of overflow and thus it becomes inevitable to drop some packets. In other switching models, packets that are sensitive to delay are dropped if they exceed a specific deadline inside the queue. We consider multiple models of switching with the goal of maximizing the throughput of the switch. If all packets are treated equally, i.e., corresponding to the best-effort concept of the...
2008
Grid technologies are emerging as the next generation of distributed computing, allowing the aggregation of resources that are geographically distributed across different locations. The network remains an important requirement for any Grid application, as entities involved in a Grid system (such as users, services, and data) need to communicate with each other over a network. The performance of the network must therefore be considered when carrying out tasks such as scheduling, migration or monitoring of jobs. Network buffer management policies affect network performance: buffers that grow too large lead to poor latencies, while keeping buffers small causes many packet drops and low utilization of links. Therefore, network buffer management policies should be considered when simulating a real Grid system. In this paper, we introduce network buffer management policies into the GridSim simulation toolkit. Our framework allows new policies to be implemented easily, thus enabling researchers to create more realistic network models. Fields that can harness our work include scheduling and QoS provision. We present a comprehensive description of the overall design and a use case scenario demonstrating how the conditions of links vary over time.
Lecture Notes in Computer Science
Emulation of Output Queuing (OQ) switches using Combined Input-Output Queuing (CIOQ) switches has been studied extensively in the setting where the switch buffers have unlimited capacity. In this paper we study the general setting where the OQ switch and the CIOQ switch have finite buffer capacity B ≥ 1 packets at every output. We analyze the resource requirements of CIOQ policies in terms of the required fabric speedup and the additional buffer capacity needed at the CIOQ inputs: A CIOQ policy is said to be (s, b)-valid (for OQ emulation) if a CIOQ employing this policy can emulate an OQ switch using fabric speedup s ≥ 1, and without exceeding buffer occupancy b at any input port. For the family of work-conserving scheduling algorithms, we find that whereas every greedy CIOQ policy is valid at speedup B, no CIOQ policy is valid at speedup s < ∛B − 2 when preemption is allowed. We also find that CCF in particular is not valid at any speedup s < B. We then introduce a CIOQ policy, CEH, that is valid at speedup s ≥ √(2(B − 1)). Under CEH, the buffer occupancy at any input never exceeds 1 + ⌊((B − 1)/(s − 1)) · ((B − 1)/(s − 2))⌋. We also show that a greedy variant of the CCF policy is (2, B)-valid for the emulation of non-preemptive OQ algorithms with PIFO service disciplines.