2015
The cloud computing paradigm offers elastic resources that can serve scalable services. These resources can be scaled horizontally or vertically. The former is more powerful: it increases the number of identical machines (scaling out) to retain the performance of the service. However, this scaling is tightly coupled with the existence of a balancer in front of the scaled resources that balances the load among the end points. In this paper, we present a successful implementation of a scalable low-level load balancer, implemented on the network layer. The scalability is proved with a series of experiments, and the results show that the balancer achieves even a super-linear speedup (a speedup greater than the number of scaled resources). The paper also discusses many other benefits that the balancer provides.
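The super-linear speedup claim in this abstract can be made concrete with the standard speedup definition, S(n) = T(1)/T(n), which is super-linear when S(n) > n. The timings below are made up for illustration and do not come from the paper's experiments:

```python
# Speedup S(n) = T(1) / T(n); super-linear means S(n) > n.
# Hypothetical measured times (seconds) with 1 and 4 scaled servers.
def speedup(t1, tn):
    return t1 / tn

t1, t4 = 100.0, 22.0
s = speedup(t1, t4)
# With these made-up numbers, s ≈ 4.55 > 4, i.e. super-linear.
assert s > 4
```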
Proceedings of the 2014 Federated Conference on Computer Science and Information Systems, 2014
Cloud service providers allow their customers to rent or release hardware resources (CPU, RAM, HDD) on demand, isolated in virtual machine instances. Increased load on customer applications or web services requires more resources than a single physical server can supply, which forces the cloud provider to implement some load balancing technique in order to scatter the load among several virtual or physical servers. Many load balancers exist, both centralized and distributed, using various techniques. In this paper we present a new solution for a low-level load balancer (L3B), working on the network layer of the OSI model. When a network packet arrives, its header is altered in order to forward it to an end-point server. After the server replies, the reply packet's header is also changed, using the previously stored mapping, and the packet is forwarded to the client. Unfortunately, the experiments showed that this implementation did not provide the expected results, i.e., it did not achieve a linear speedup when more server nodes were added.
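The header-rewrite-plus-stored-mapping scheme described above can be sketched in a few lines. This is a minimal illustration of the idea, not the paper's actual L3B implementation; the `L3Balancer` and `Packet` names, addresses, and round-robin choice are all hypothetical:

```python
# Sketch of a NAT-style low-level balancer: rewrite the destination of
# inbound packets, remember the client->server mapping, and use it to
# rewrite server replies back to the virtual IP. Illustrative only.
import itertools
from dataclasses import dataclass

@dataclass
class Packet:
    src: str       # source "IP:port"
    dst: str       # destination "IP:port"
    payload: bytes

class L3Balancer:
    def __init__(self, vip, servers):
        self.vip = vip                        # virtual IP clients target
        self._rr = itertools.cycle(servers)   # simple round-robin choice
        self.mapping = {}                     # client addr -> chosen server

    def inbound(self, pkt):
        # pick (or reuse) a server for this client, rewrite destination
        if pkt.src not in self.mapping:
            self.mapping[pkt.src] = next(self._rr)
        return Packet(src=pkt.src, dst=self.mapping[pkt.src],
                      payload=pkt.payload)

    def outbound(self, pkt):
        # server reply: restore the virtual IP via the stored mapping
        assert self.mapping.get(pkt.dst) == pkt.src
        return Packet(src=self.vip, dst=pkt.dst, payload=pkt.payload)

lb = L3Balancer("10.0.0.1:80", ["10.0.1.1:80", "10.0.1.2:80"])
fwd = lb.inbound(Packet("1.2.3.4:5555", "10.0.0.1:80", b"GET /"))
rep = lb.outbound(Packet(fwd.dst, "1.2.3.4:5555", b"200 OK"))
```

A real network-layer balancer would do this to raw IP headers (and recompute checksums) rather than to Python objects; the mapping table is what lets replies reach the right client.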
Shanti Swaroop Moharana, Rajadeepan D. Ramesh & Digamber Powar, 2013
Cloud computing is a high-utility software paradigm that has the ability to change the IT software industry and make software even more attractive. It has also changed the way IT companies buy and design hardware. The elasticity of resources without paying a premium for large scale is unprecedented in the history of the IT industry. Web traffic and the variety of services are growing day by day, making load balancing a major research topic. Cloud computing uses virtual machines instead of physical machines to host, store, and network the different components. Load balancers assign load to the different virtual machines in such a way that no node is loaded too heavily or too lightly. Load balancing needs to be done properly, because the failure of any one node can lead to unavailability of data.
arXiv (Cornell University), 2020
A high performance Layer-4 load balancer (LB) is one of the most important components of a cloud service infrastructure. Such an LB uses network and transport layer information for deciding how to distribute client requests across a group of servers. A crucial requirement for a stateful LB is per connection consistency (PCC); namely, that all the packets of the same connection will be forwarded to the same server, as long as the server is alive, even if the pool of servers or the assignment function changes. The challenge is in designing a high throughput, low latency solution that is also scalable. This paper proposes a highly scalable LB, called Prism, implemented using a programmable switch ASIC. As far as we know, Prism is the first reported LB that can process millions of connections per second and hundreds of millions of connections in total, while ensuring PCC. This is because Prism forwards all packets in hardware, even during server pool changes, while avoiding the need to maintain hardware state for every active connection. We implemented a prototype of the proposed architecture and showed that Prism can scale to 100 million simultaneous connections and can accommodate more than one pool update per second.
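The PCC requirement described above can be illustrated with the common hash-plus-exception-table technique: connections are normally routed by hashing, and only connections whose hash target would change across a pool update are pinned to their old server. This sketch shows the general idea, not Prism's actual in-switch design; all names and the connection identifier format are made up:

```python
# Per-connection consistency (PCC) sketch: after a server pool change,
# every active connection keeps its original server. General technique
# illustration only -- not Prism's hardware design.
import hashlib

def pick(pool, conn_id):
    # deterministic hash-based server choice for a connection 5-tuple
    h = int(hashlib.sha256(conn_id.encode()).hexdigest(), 16)
    return pool[h % len(pool)]

class PccLB:
    def __init__(self, pool):
        self.pool = list(pool)
        self.pinned = {}   # exceptions: conn -> server kept from before

    def route(self, conn_id):
        return self.pinned.get(conn_id, pick(self.pool, conn_id))

    def update_pool(self, new_pool, active_conns):
        # pin each active connection whose new hash target would differ
        for c in active_conns:
            old = self.route(c)
            if pick(list(new_pool), c) != old:
                self.pinned[c] = old
        self.pool = list(new_pool)

lb = PccLB(["s1", "s2", "s3"])
conn = "1.2.3.4:5555->vip:80/tcp"
before = lb.route(conn)
lb.update_pool(["s1", "s2", "s3", "s4"], active_conns=[conn])
assert lb.route(conn) == before   # PCC holds across the update
```

The hard part that Prism addresses is doing this at switch speed without a per-connection hardware table; the software dictionary above is exactly the state such designs try to avoid.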
SKIT Research Journal, VOLUME 9; ISSUE 2: 2019, 2019
As the use of the internet increases, corporations are migrating their business from traditional computing to cloud computing; thus the number of users on the cloud is increasing, and the load is increasing with it. To provide congestion-free and reliable on-demand service to clients, a load balancing method is needed. Many algorithms have been proposed for load balancing and auto-scaling to handle the load. We can use cloud services to build a load-efficient model in the cloud environment. This model will provide load balancing, scaling capabilities, and monitoring of solutions in the cloud environment. To achieve the above, we use public cloud services such as Amazon's EC2 and ELB. This research is divided into four parts: load balancing, auto-scaling, latency-based routing, and resource monitoring. We implement and test each individual service while generating load from the external software tool PuTTY, and we produce results for efficient load balancing.
International journal of engineering research and technology, 2018
Cloud computing is a new technology that depends entirely on the internet to maintain large applications, where data is shared over one platform to provide better services to clients belonging to different organizations. Load balancing is one of the main challenges in cloud computing. It is a technique for distributing the dynamic workload across multiple nodes to ensure that no single node is overloaded. Load balancing techniques thus help in the optimal utilization of resources and hence enhance the overall performance of the system. The goal of load balancing is to minimize resource consumption, which in turn reduces energy consumption. Its main motive is to optimize resource usage, improve fault tolerance and scalability, increase throughput, and reduce response time [1]. It becomes a severe problem as the number of users and the types of applications on the cloud increase. The main highlight of this paper is the load balancing techniques...
2024
This paper presents a novel load-balancing algorithm for balancing load in the cloud environment, which we call the Response Time Efficient Load Balancer (ReT-ELBa). Load balancing involves a dynamic and equal distribution of workloads among processors or Virtual Machines to achieve better resource utilisation. The study gives an insight into the design, implementation and evaluation of an enhanced load-balancing algorithm that allocates tasks across Virtual Machines on the cloud, minimising Response Time and increasing resource utilisation. ReT-ELBa distributes tasks based on their sizes, the requirements for their execution, and the state of the Virtual Machines. The Cloud Analyst simulation tool was employed to simulate and evaluate various cases for ReT-ELBa in comparison with the Throttled and Round Robin algorithms. The study revealed that ReT-ELBa outperformed the two algorithms in terms of Response Time. ReT-ELBa also outperformed the two algorithms in terms of Data Centre Processing Time for all simulated cases except one (Case 3), which recorded a small difference of 0.07 ms.
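The general idea behind a response-time-oriented allocator like the one described above can be sketched as picking, for each task, the VM with the smallest estimated completion time. This is a hedged illustration of the concept only; the scoring rule, MIPS figures, and function names are assumptions, not the published ReT-ELBa algorithm:

```python
# Minimal response-time-aware dispatch sketch: estimate each VM's
# finish time for a task as (queued work + task length) / speed and
# pick the minimum. Illustrative numbers; not the paper's algorithm.
def assign(task_len, vms):
    """vms: dict name -> (mips, queued_instructions). Returns the VM
    expected to finish this task soonest and updates its queue."""
    def eta(name):
        mips, queued = vms[name]
        return (queued + task_len) / mips
    best = min(vms, key=eta)
    mips, queued = vms[best]
    vms[best] = (mips, queued + task_len)   # record the new load
    return best

vms = {"vm1": (1000, 5000), "vm2": (500, 0), "vm3": (2000, 10000)}
chosen = assign(3000, vms)   # vm2: (0+3000)/500 = 6.0 is the lowest ETA
```

Note how the slowest VM (vm2, 500 MIPS) still wins here because it is idle; a pure speed-based or round-robin policy would miss that, which is the kind of gap response-time-aware balancing targets.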
Scalable Computing: Practice and Experience, 2016
Cloud, fog and dew computing concepts offer elastic resources that can serve scalable services. These resources can be scaled horizontally or vertically. The former is more powerful: it increases the number of identical machines (scaling out) to retain the performance of the service. However, this scaling is tightly coupled with the existence of a balancer in front of the scaled resources that balances the load among the end points. In this paper, we present a successful implementation of a scalable low-level load balancer, implemented on the network layer. The scalability is tested by a series of experiments on small-scale servers providing services in the range of dew computing services. The experiments showed that the balancer adds a small latency of several milliseconds and thus slightly reduces performance when the distributed system is underutilized. However, the results show that the balancer achieves even a super-linear speedup (a speedup greater than the number of scaled resources) under greater load. The paper also discusses many other benefits that the balancer provides.
In the cloud computing paradigm, the scheduling of computing resources is a critical concern. With the increase in the number of users and the types of applications on the cloud computing platform, effective utilization of the resources in the system becomes critical to ensuring service level agreements (SLAs). Resource distribution and effective load balancing are necessary mechanisms for meeting SLAs and making better use of the available resources in a heterogeneous environment.
Cloud computing describes an internet-based computing technology that enables users to access information and use various resources from the cloud from any location. This technology is evolving and developing with the increasing demands of the IT sector and business environments. Among the various issues that surround it, load balancing is an important one, aiming for an even distribution of the workload in the system to enhance performance. This paper presents a review of various load balancing techniques and compares various parameters that are important for performance.