2007, 21st IEEE International Parallel and Distributed Processing Symposium (IPDPS 2007), Long Beach, California, USA, pp. 1-10, 26-30 March 2007
https://doi.org/10.1109/IPDPS.2007.370312
In this paper, we review two existing static load balancing schemes based on M/M/1 queues. We then use these schemes to propose two dynamic load balancing schemes for multi-user (multi-class) jobs in heterogeneous distributed systems. These two dynamic load balancing schemes differ in their objective. One tries to minimize the expected response time of the entire system while the other tries to minimize the expected response time of the individual users. The performance of the dynamic schemes is compared with that of the static schemes using simulations with various loads and parameters. The results show that, at low communication overheads, the dynamic schemes show superior performance over the static schemes. But as the overheads increase, the dynamic schemes (as expected) yield similar performance to that of the static schemes.
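Since each host is modeled as an M/M/1 queue, the expected response time at host i with service rate mu_i and arrival rate lambda_i is 1/(mu_i - lambda_i), so the system-wide objective of the first kind of scheme is to minimize sum_i (lambda_i/Lambda) * 1/(mu_i - lambda_i) over the split of the total arrival rate Lambda. A minimal numerical sketch of such a static split (assuming NumPy/SciPy; function and variable names are illustrative, not the paper's formulation):

    import numpy as np
    from scipy.optimize import minimize

    def static_mm1_split(mu, total_rate):
        """Split a total Poisson arrival rate across heterogeneous M/M/1 servers
        (service rates mu[i]) so the system-wide expected response time
        sum_i (lam_i / total_rate) * 1 / (mu_i - lam_i) is minimized.
        Solved numerically as a small convex program."""
        mu = np.asarray(mu, dtype=float)
        if total_rate >= mu.sum():
            raise ValueError("offered load exceeds total service capacity")

        def mean_response_time(lam):
            return np.sum(lam / (mu - lam)) / total_rate

        x0 = mu / mu.sum() * total_rate                       # start proportional to capacity
        result = minimize(
            mean_response_time, x0,
            bounds=[(0.0, m * 0.999) for m in mu],            # keep every queue stable
            constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - total_rate}],
        )
        return result.x

    # Example: three hosts with service rates 5, 3 and 1 jobs/s, 4 jobs/s arriving in total.
    print(static_mm1_split([5.0, 3.0, 1.0], 4.0))

A characteristic property of such splits is that fast hosts receive more than their proportional share while sufficiently slow hosts may receive no load at all.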
2007
In this paper, the problem of distributing the load of a particular node over m identical nodes of a distributed computing system so as to minimize turnaround time is studied first. An efficient technique is then presented for dynamically scheduling jobs in large-scale, multiuser distributed computing systems that balances system performance against scheduling overhead. The nodes are scheduled independently and asynchronously, with distinct execution initiation times corresponding to their earliest instants of becoming overloaded. The technique handles resource management by dividing the nodes of the system into mutually overlapping subsets, so that a node obtains system state information by querying only a few nodes. The approach is primarily targeted at systems composed of general-purpose workstations with identical processors. Process scheduling decisions are driven by the desire to minimize turnaround time while maintaining fairness among competing applications and minimizing communication overhead. Performance analysis shows that the technique significantly reduces the total number of messages a node requires to make a scheduling decision.
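The "mutually overlapping subsets" idea can be pictured with a toy construction in which node i's subset is the next k nodes in a circular ordering, so neighboring subsets share members; this construction and the names below are illustrative assumptions, not the paper's actual scheme:

    def buddy_set(node, num_nodes, k):
        """Hypothetical overlapping-subset construction: node i's subset is the
        next k nodes in a circular ordering, so adjacent subsets share members."""
        return [(node + j) % num_nodes for j in range(1, k + 1)]

    def pick_target(node, loads, k=3):
        """When `node` becomes overloaded it queries only its subset (k queries,
        k replies) and picks the least loaded member as the migration target."""
        subset = buddy_set(node, len(loads), k)
        return min(subset, key=lambda j: loads[j])

    # Example: node 5 of 10 is overloaded; it consults only nodes 6, 7 and 8.
    loads = [0.9, 0.2, 0.7, 0.4, 0.6, 0.95, 0.3, 0.8, 0.1, 0.5]
    print(pick_target(5, loads))

The point of the construction is that the number of messages per decision depends on the subset size k, not on the total number of nodes.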
Microelectronics Reliability, 1996
In distributed systems it is possible for some nodes to be heavily loaded while others are lightly loaded, resulting in poor overall system performance. The purpose of load balancing is to improve performance by redistributing the workload among the nodes. In this paper four load balancing techniques are studied by simulation. The study is limited to a class of techniques where jobs are lined up in a generic queue and sent to a central job dispatcher, which allocates each job to a particular processor based on one of the following criteria: nondeterministic routing, response time, system time and throughput. We propose an algorithm that reduces the computational complexity of algorithms ensuring minimum system time.
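As an illustration of the "system time" criterion, a central dispatcher can estimate, for each processor, the time an arriving job would spend in the system (queued work plus its own service demand) and route the job to the processor minimizing that estimate. The sketch below is a simplification with hypothetical names, not the simulated algorithm itself:

    def dispatch_min_system_time(job_size, backlog, speed):
        """Route an arriving job to the processor whose estimated system time
        (outstanding work plus this job's own service demand) is smallest.
        `backlog[i]` is queued work and `speed[i]` the processing rate of
        processor i; both names are illustrative, not from the paper."""
        best = min(range(len(speed)),
                   key=lambda i: (backlog[i] + job_size) / speed[i])
        backlog[best] += job_size           # the chosen processor absorbs the job
        return best

    # Example: a job of size 2 arrives at the central dispatcher.
    backlog, speed = [4.0, 1.0, 3.0], [2.0, 1.0, 1.5]
    print(dispatch_min_system_time(2.0, backlog, speed))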
IEE Proceedings - Computers and Digital Techniques
Although dynamic load-balancing strategies have the potential of performing better than static strategies, they are inevitably more complex. Their complexity and the overheads involved may negate their benefits. A heterogeneous distributed system, with computers of different processing capability but the same functionality, has been examined for two dynamic and two static policies. The results show that both the dynamic and the static policies provide dramatic performance improvements. However, they show that, contrary to common belief, the performance provided by the static policies is not much inferior to that provided by the dynamic policies. Furthermore, if the overheads in load balancing are not negligibly small, static policies are more stable and can offer better performance than dynamic policies.
In a large distributed computing environment such as the Grid, tasks can be submitted at any host, and the random arrival of tasks in such an environment can cause some hosts to be heavily loaded while others are idle or lightly loaded. In such an environment, load imbalance can potentially be reduced by appropriate transfers of tasks from heavily loaded computers (known as 'senders') to idle or lightly loaded computers (known as 'receivers'). Various load balancing algorithms have been proposed over the last couple of decades. This paper presents a comparative study of some of them, along with their pitfalls in the case of a huge distributed environment such as the Grid.
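A minimal sketch of the sender/receiver pairing that most of the surveyed algorithms build on (the thresholds and names are illustrative assumptions, not taken from any particular algorithm in the survey):

    HIGH, LOW = 0.8, 0.3   # hypothetical utilization thresholds

    def plan_transfers(loads):
        """Pair the most heavily loaded hosts (senders) with the most lightly
        loaded ones (receivers); each pair is a candidate task transfer."""
        senders = sorted((i for i, u in enumerate(loads) if u > HIGH),
                         key=lambda i: -loads[i])
        receivers = sorted((i for i, u in enumerate(loads) if u < LOW),
                           key=lambda i: loads[i])
        return list(zip(senders, receivers))

    # Example: hosts 1 and 4 are overloaded, hosts 0 and 3 are nearly idle.
    print(plan_transfers([0.1, 0.9, 0.5, 0.2, 0.85]))   # -> [(1, 0), (4, 3)]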
Proceedings IEEE International Conference on Cluster Computing CLUSTR-03, 2003
Web applications are being challenged to develop methods and techniques for processing large volumes of data at optimum response time. There are technical challenges in dealing with the increasing demand to handle vast traffic on these websites. As the number of users increases, web servers face several problems such as bottlenecks, delayed response time, load imbalance and density of services. The whole traffic cannot reside on a single server, so there is a fundamental requirement to distribute this huge traffic across multiple load-balanced servers. Distributing requests among the servers of a web server cluster is the most important means to address this challenge, especially under intense workloads. In this paper, we propose a new request distribution algorithm for load balancing among web server clusters. Dynamic load balancing among the web servers takes place based on the user's request and on a dynamic estimate of server workload using multiple parameters such as processing and memory requirements, expected execution time and various time intervals. Our simulation results show that the proposed method dynamically and efficiently balances the load to scale up the services, and we report average response time, average waiting time and server throughput on different web servers. At the end of the paper, we present an experiment running the proposed system which shows that the proposed algorithm is efficient in terms of processing speed, response time, server utilization and cost efficiency.
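A rough sketch of the kind of multi-parameter workload estimate described above, with hypothetical weights and field names rather than the paper's actual formula:

    def server_score(cpu, mem, eta, weights=(0.4, 0.3, 0.3)):
        """Hypothetical composite load index built from the kinds of parameters
        the abstract lists: CPU demand, memory demand and expected execution
        time (all normalized to [0, 1]); the weights are illustrative."""
        w_cpu, w_mem, w_eta = weights
        return w_cpu * cpu + w_mem * mem + w_eta * eta

    def pick_server(servers):
        """Route the next request to the server with the lowest composite score."""
        return min(servers, key=lambda s: server_score(s["cpu"], s["mem"], s["eta"]))

    # Example with three back-end servers.
    servers = [{"name": "w1", "cpu": 0.7, "mem": 0.5, "eta": 0.6},
               {"name": "w2", "cpu": 0.3, "mem": 0.4, "eta": 0.2},
               {"name": "w3", "cpu": 0.9, "mem": 0.8, "eta": 0.9}]
    print(pick_server(servers)["name"])   # -> w2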
Load balancing is the process of redistributing the workload among the nodes of a distributed system to improve both resource utilization and job response time, while also avoiding a situation where some nodes are heavily loaded and others are idle or doing little work. A dynamic load balancing algorithm assumes no a priori knowledge about job behavior or the global state of the system, i.e., load balancing decisions are based solely on the current status of the system. The development of an effective dynamic load balancing algorithm involves many important issues: load estimation, load level comparison, performance indices, system stability, the amount of information exchanged among nodes, estimation of job resource requirements, selection of jobs for transfer, selection of remote nodes, and more. This paper presents and analyses the issues that need to be considered in the development or study of a dynamic load balancing algorithm.
IEEE Transactions on Computers, 1998
Load balancing problems for multiclass jobs in distributed/parallel computer systems with general network configurations are considered. We construct a general model of such a distributed/parallel computer system. The system consists of heterogeneous host computers/processors (nodes) which are interconnected by a generally configured communication/interconnection network wherein there are several classes of jobs, each of which has its distinct delay function at each host and each communication link. This model is used to formulate the multiclass job load balancing problem as a nonlinear optimization problem in which the goal is to minimize the mean response time of a job.
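With illustrative notation (lambda_{ri} for the class-r job flow assigned to host i, lambda_{r,ij} for the class-r flow forwarded over link (i,j), F_{ri} and G_{r,ij} for the corresponding class-dependent delay functions, and Lambda_r for the total class-r arrival rate; these are not necessarily the paper's symbols), the nonlinear optimization problem takes roughly the form

    \min_{\lambda \ge 0} \; T(\lambda) \;=\; \frac{1}{\Lambda}\Big[\sum_{r}\sum_{i}\lambda_{ri}\,F_{ri}(\lambda) \;+\; \sum_{r}\sum_{(i,j)}\lambda_{r,ij}\,G_{r,ij}(\lambda)\Big]
    \quad \text{subject to} \quad \sum_{i}\lambda_{ri} = \Lambda_r \ \text{for every class } r,

where \Lambda = \sum_r \Lambda_r is the total arrival rate. The delay functions couple the classes, since every class experiences the congestion caused by all the others at each host and link.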
International Journal of Electrical and Computer Engineering (IJECE), 2018
In networks with a large amount of computation, load balancing gains increasing significance. To offer various resources, services and applications, the ultimate aim is to facilitate the sharing of services and resources on the network over the Internet. A key issue to be addressed in networks with a large amount of computation is load balancing. Load is the number of tasks 't' performed by a computing system. The load can be categorized as network load and CPU load. For an efficient load balancing strategy, the process of assigning the load between the nodes should enhance resource utilization and minimize computation time. This can be accomplished by a uniform distribution of the load to all the nodes. A load balancing method should guarantee that each node in the network performs an almost equal amount of work, pertinent to its capacity and the availability of resources. Relying on task subtraction, this work presents an algorithm termed E-TS (Efficient-Task Subtraction), which selects appropriate nodes for each task. The proposed algorithm improves the utilization of computing resources and preserves neutrality in assigning the load to the nodes in the network.
International Journal of Computer Applications, 2013
The anticipated uptake of Cloud computing, built on well-established research in Web Services, networks, utility computing, distributed computing and virtualization, will bring many advantages in cost, flexibility and availability for service users. The cloud is based on data centers that are powerful enough to handle large numbers of users. As cloud computing is a new style of computing over the Internet, it has many advantages along with some crucial issues to be resolved in order to improve the reliability of the cloud environment. Central to this is the implementation of an effective load balancing algorithm. This paper investigates two distributed load balancing algorithms that have been proposed for load balancing: round robin and throttled scheduling.
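A minimal sketch of the two policies (class and parameter names are illustrative; the throttled policy is shown with a per-VM concurrency limit):

    from itertools import cycle

    class RoundRobin:
        """Hand requests to virtual machines in a fixed circular order,
        ignoring how busy each one currently is."""
        def __init__(self, vms):
            self._order = cycle(vms)
        def allocate(self, request):
            return next(self._order)

    class Throttled:
        """Send each request to the first VM below its concurrency limit;
        return None (the request waits) when every VM is saturated."""
        def __init__(self, vms, limit=1):
            self.active = {vm: 0 for vm in vms}
            self.limit = limit
        def allocate(self, request):
            for vm, busy in self.active.items():
                if busy < self.limit:
                    self.active[vm] += 1
                    return vm
            return None
        def release(self, vm):
            self.active[vm] -= 1

Round robin spreads requests evenly regardless of load, whereas the throttled policy caps the number of concurrent requests per VM and defers the rest, which is why the two behave differently under bursty traffic.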
1988
Distributed Computing Systems (DCSs) evolved to provide communication among replicated and physically distributed computers as hardware costs decreased. Interconnecting physically distributed computers allows better communication and improved performance through redistribution (or load balancing) of workload. In this paper, we describe a load balancing strategy for a computer system connected by multiaccess broadcast network. The strategy uses the existing broadcast capability of these networks to implement an efficient search technique for finding stations with the maximum and the minimum workload. The overhead of distributing status information in the proposed strategy is independent of the number of stations. This result is significant because the primary overhead in load balancing lies in the collection of status information. An implementation of the proposed strategy on a network of Sun workstations is presented. It consists of two modules that are executed at all participating computers: the distributed-search module that isolates the maximally and minimally loaded computers, and the job-migration module that places a job based on the load extremes.
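The search can be pictured as a window protocol: the searcher repeatedly broadcasts a load window and only stations whose load falls inside it respond, so the number of probe rounds is governed by the window resolution rather than by the number of stations. The toy simulation below illustrates the idea for the minimally loaded station; it is an assumption-laden sketch, not the paper's protocol:

    def find_min_loaded(loads, lo=0.0, hi=1.0, rounds=8):
        """Toy window search over a broadcast network: each round the searcher
        broadcasts a threshold and only stations at or below it respond; the
        window is halved until the lightest station is isolated."""
        candidates = list(range(len(loads)))
        for _ in range(rounds):
            mid = (lo + hi) / 2.0
            responders = [i for i in candidates if loads[i] <= mid]  # one broadcast probe
            if responders:
                candidates, hi = responders, mid      # narrow toward the lighter half
            else:
                lo = mid                              # nobody that light: raise the floor
        return min(candidates, key=lambda i: loads[i])

    # The maximally loaded station can be found the same way with the comparison reversed.
    print(find_min_loaded([0.42, 0.07, 0.93, 0.55, 0.18]))   # -> station 1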