Cloudlet scheduling is among the most fundamental problems of cloud computing under the Infrastructure as a Service (IaaS) model. Proper scheduling in the cloud leads to load balancing, minimization of makespan, and adequate resource utilization. To meet consumers' expectations, cloudlets must be executed simultaneously. Many algorithms have been implemented to solve the cloud scheduling problem, including Min-Min, which gives priority to the cloudlet with the minimum completion time. The Min-Min scheduling algorithm has two clear weaknesses: it generates a high makespan and achieves low resource utilization. To address these problems, this research proposes an Extended Min-Min algorithm, which assigns cloudlets based on the difference between the maximum and minimum execution times of the cloudlets. CloudSim was used to implement the proposed algorithm and compare its performance with the benchmarks. The results of extensive experiments show that the proposed algorithm performs better in terms of makespan minimization than the existing heuristics.
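For reference, the baseline Min-Min heuristic that this line of work builds on can be sketched as follows. This is a generic illustration with made-up cloudlet lengths (millions of instructions) and VM speeds (MIPS), not the paper's Extended Min-Min or its CloudSim code.

```python
# Min-Min heuristic: at each step, pick the (cloudlet, VM) pair with the
# smallest earliest completion time, bind it, and repeat until all are placed.

def min_min(lengths, speeds):
    """lengths: cloudlet sizes in MI; speeds: VM capacities in MIPS."""
    ready = [0.0] * len(speeds)          # time at which each VM becomes free
    schedule = {}                        # cloudlet index -> VM index
    pending = set(range(len(lengths)))
    while pending:
        best = None                      # (completion time, cloudlet, vm)
        for c in pending:
            for v, s in enumerate(speeds):
                ct = ready[v] + lengths[c] / s
                if best is None or ct < best[0]:
                    best = (ct, c, v)
        ct, c, v = best
        ready[v] = ct                    # VM v is busy until ct
        schedule[c] = v
        pending.remove(c)
    return schedule, max(ready)          # assignment and makespan

schedule, makespan = min_min([4000, 1000, 6000, 2000], [1000.0, 500.0])
print(schedule, makespan)
```

Because short cloudlets are always served first, the longest cloudlet lands last on whatever VM happens to be least loaded, which is the makespan and utilization weakness the abstract describes.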
The fundamental problems in a cloud computing environment are resource allocation and cloudlet scheduling. When scheduling cloudlets in the cloud, different cloudlets need to be executed simultaneously by the available resources in order to meet consumers' expectations and to achieve better performance by minimizing makespan and balancing load effectively. To achieve this, we propose a novel mechanism called the Modified Max-Min (MMax-Min) algorithm, inspired by the Max-Min algorithm. The proposed algorithm finds the cloudlets with the maximum and minimum completion times and assigns either of them for execution according to the specifications, in order to speed up cloud scheduling and increase throughput. The results of the simulation using CloudSim show that our proposed approach produces good-quality solutions, yielding good makespan values and balancing load effectively compared to the standard Max-Min and Round Robin algorithms.
International Journal for Research in Applied Science and Engineering Technology IJRASET, 2020
Cloud computing is an information technology paradigm that enables access to shared pools of configurable system resources and higher-level services. It provides delivery of computing services such as storage, servers, and databases. Task scheduling is an important part of cloud computing, given the limited number of heterogeneous resources and the increasing number of user tasks. A large number of big cloudlets may cause failure in load balancing, increased makespan, and decreased average resource utilization. This paper provides an efficient algorithm to solve these issues in task scheduling.

Index Terms: HCA, EHCA, OCT, EDCB

I. INTRODUCTION

Cloud computing is the on-demand availability of computer system resources, especially data storage (cloud storage) and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet. Large clouds, predominant today, often have functions distributed over multiple locations from central servers. If the connection to the user is relatively close, it may be designated an edge server. Clouds may be limited to a single organization (enterprise clouds) or be available to many organizations (public clouds). Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power at certain periods of peak demand.
Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud pricing models. The availability of high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing, has led to growth in cloud computing. In recent years, cloud computing has emerged as a heterogeneous distributed computing system that manages and allocates computing resources to user applications over the Internet in a self-service, dynamically scalable, and metered manner [1]. A user application can be divided into small or large tasks (cloudlets). The advantage of large tasks is that the number of cloudlets is smaller than with small tasks. However, a great number of large cloudlets causes the cloud system to fail to achieve good load balancing and low completion times, because of their high execution times and the delays they cause on virtual machines (VMs). Due to the security issues related to virtualization [7], the VMs are distributed among an increasing number of cloud users to run their cloudlets, which means the number of VMs available per single user is much smaller. Optimizing task scheduling for good load balancing and minimum makespan, taking into account both large and small cloudlets under resource capacity constraints such as the number of VMs and their processing speeds, remains a major challenge not well solved in cloud computing [1]. Since task scheduling is an NP-hard problem, a good heuristic cloudlet allocation is required for managing a high number of large and small cloudlets submitted by a single user, to maximize both load balancing and resource utilization and to minimize overall completion time.
For task scheduling, many heuristic algorithms have been proposed to minimize the execution and finish times of tasks, such as the first strategy algorithm (FSA) [1], round robin (RR) [1], and the standard deviation based modified cuckoo optimization algorithm (SDMCOA). FSA distributes to each VM a vector that defines the number of cloudlets per VM, based on a deadline, the lengths of the cloudlets, and the execution speeds of the VMs. FSA does not consider the completion time of each VM or the size of large cloudlets, which leads to higher completion times and improper load balancing. RR uses a ring as its queue to store cloudlets and allocates resources in circular order without using cloudlet priorities. Each cloudlet in the queue is given the same fixed unit of time, called a quantum, allocated by the scheduler …
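The quantum-based RR behaviour described above can be sketched for a single processing resource as follows; the burst times and quantum are illustrative values, not taken from the paper.

```python
# Round Robin with a fixed quantum: cloudlets wait in a circular queue; each
# runs for at most one quantum, then re-enters the queue if work remains.
from collections import deque

def round_robin(burst_times, quantum):
    queue = deque(enumerate(burst_times))   # (cloudlet id, remaining time)
    clock = 0.0
    finish = {}
    while queue:
        cid, remaining = queue.popleft()
        run = min(quantum, remaining)       # run for at most one quantum
        clock += run
        remaining -= run
        if remaining > 0:
            queue.append((cid, remaining))  # back of the ring
        else:
            finish[cid] = clock             # completion time of this cloudlet
    return finish

print(round_robin([5, 3, 8], quantum=4))
```

Note that no priority is involved: a very long cloudlet simply keeps cycling through the ring, which is why RR alone gives no makespan guarantee.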
International Journal of Computer Applications, 2016
Cloud computing is a new archetype that provides dynamic computing services to cloud users through datacenters, which employ datacenter brokers that discover resources and assign them virtually. The focus of this research is to efficiently optimize resource allocation in the cloud by exploiting the Max-Min scheduling algorithm and enhancing it to increase efficiency in terms of completion time (makespan). This is key to enhancing the performance of cloud scheduling and narrowing the performance gap between cloud service providers and cloud resource consumers/users. The current Max-Min algorithm selects the task with the maximum execution time and assigns it to a faster available machine or resource capable of giving the minimum completion time. The algorithm gives priority to tasks with maximum execution time before assigning those with minimum execution time, for the purpose of minimizing makespan. Its drawback is that executing the tasks with maximum execution time first may increase the makespan and delay the execution of short tasks when the number of long tasks exceeds the number of short tasks, hence the need to improve it to mitigate this delay. CloudSim is used to compare the effectiveness of the improved Max-Min algorithm with the traditional one. The experimental results show that the improved algorithm is efficient and can produce a better makespan than Max-Min and DataAware.
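The standard Max-Min selection rule described above can be sketched as follows; this is a generic illustration with invented lengths and speeds, not the paper's improved variant. Each pending cloudlet's best (earliest) completion time over all VMs is computed, and the cloudlet whose best completion time is largest is scheduled first.

```python
# Max-Min heuristic: schedule the cloudlet whose earliest possible completion
# time is LARGEST, on the VM that achieves that earliest completion.

def max_min(lengths, speeds):
    """lengths: cloudlet sizes in MI; speeds: VM capacities in MIPS."""
    ready = [0.0] * len(speeds)          # time at which each VM becomes free
    schedule, pending = {}, set(range(len(lengths)))
    while pending:
        chosen = None                    # (best completion time, cloudlet, vm)
        for c in pending:
            # earliest possible completion of cloudlet c over all VMs
            ct, v = min((ready[i] + lengths[c] / s, i)
                        for i, s in enumerate(speeds))
            # Max-Min keeps the cloudlet whose best completion time is largest
            if chosen is None or ct > chosen[0]:
                chosen = (ct, c, v)
        ct, c, v = chosen
        ready[v], schedule[c] = ct, v
        pending.remove(c)
    return schedule, max(ready)          # assignment and makespan

schedule, makespan = max_min([4000, 1000, 6000, 2000], [1000.0, 500.0])
print(schedule, makespan)
```

Placing long cloudlets first lets short ones fill the remaining gaps, but, as the abstract notes, when long tasks dominate the batch the short ones can be starved.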
Sensors, 2021
Cloud computing is an emerging paradigm that offers flexible and seamless services for users based on their needs, including user budget savings. However, the involvement of a vast number of cloud users has made the scheduling of users’ tasks (i.e., cloudlets) a challenging issue in selecting suitable data centres, servers (hosts), and virtual machines (VMs). Cloudlet scheduling is an NP-complete problem that can be solved using various meta-heuristic algorithms, which are quite popular due to their effectiveness. Massive user tasks and rapid growth in cloud resources have become increasingly complex challenges; therefore, an efficient algorithm is necessary for allocating cloudlets efficiently to attain better execution times, resource utilisation, and waiting times. This paper proposes a locust-inspired cloudlet scheduling algorithm to reduce the average makespan and waiting time and to boost VM and server utilisation. The CloudSim toolkit was used to evaluate our algorithm’s eff...
IOSR Journal of Computer Engineering, 2016
The main concern of cloud computing is managing software applications, data storage, and processing capacity, which are assigned to external users on demand through the Internet, with users paying only for what they use. Task scheduling is one of the biggest challenges in cloud computing because many tasks need to be executed by the available resources in order to meet users' requirements. These challenges must be addressed in order to achieve the best performance: minimizing total completion time, minimizing response time, and maximizing resource utilization. This paper studies different task scheduling algorithms, and an Enhanced Min-min algorithm is developed. The algorithm uses the advantages of Min-min and avoids its drawbacks. The main idea of the proposed algorithm is to allocate tasks to resources appropriately in order to achieve effective load balancing and decrease completion time. The experimental results indicate that, compared to Min-min and Max-min, the proposed algorithm produces a better makespan and improved resource utilization.
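The two quantities these comparisons rest on can be made precise with a small sketch: makespan is the finish time of the busiest VM, and average resource utilisation is each VM's busy time divided by the makespan, averaged over the VMs. The per-VM busy times below are illustrative values, not figures from the paper.

```python
# Makespan and average utilisation from per-VM busy times.

def makespan_and_utilisation(busy_times):
    """busy_times: total busy time of each VM under a given schedule."""
    makespan = max(busy_times)                       # busiest VM finishes last
    utilisation = sum(t / makespan for t in busy_times) / len(busy_times)
    return makespan, utilisation

ms, util = makespan_and_utilisation([12.0, 9.0, 6.0])
print(ms, util)   # makespan 12.0; utilisation (1.0 + 0.75 + 0.5) / 3 = 0.75
```

The formula makes the trade-off visible: shaving the busiest VM's load both lowers the makespan and raises every other VM's utilisation ratio, which is why the two metrics usually improve together.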
In cloud computing, the user can access shared resources over the network in a service-based environment. On-demand access to computing resources such as networks, servers, and applications is the central idea of cloud computing. With developments in computing technology, cloud computing has added a new paradigm to user services that permits accessing Information Technology services on a pay-per-use basis at any time and from any location. Owing to the flexibility of cloud services, many organizations are shifting their business to the cloud, and service providers are creating more data centers to offer services to users. However, it is vital to provide cost-effective execution of tasks and appropriate utilization of resources. Cloud computing manages a variety of virtualized resources, which makes scheduling a critical component. In the cloud, a client may utilize many thousands of virtualized resources, so manual scheduling is not a feasible solution. Task scheduling is one of the vital techniques in the cloud computing environment. It is essential for allocating tasks to the appropriate resources and optimizing overall system performance. The simple idea behind task scheduling is to assign tasks so as to minimize execution time and maximize performance and resource utilization. In this paper we survey the Min-Min, Max-Min, and a hybrid algorithm to analyse which gives better makespan and resource utilization.
Menoufia Journal of Electronic Engineering Research
This paper presents a new hybrid approach, called ACOSA, for cloudlet scheduling, to enhance scheduler behavior in the Cloud computing (CC) environment and to overcome the result-oscillation problem of existing meta-heuristic scheduling algorithms. The proposed approach combines the Ant Colony Optimization (ACO) and Simulated Annealing (SA) algorithms to improve both the quality of solutions and the time complexity of the scheduling algorithm. The approach is evaluated using the well-known CloudSim, and the results are compared with ant colony and simulated annealing separately in terms of schedule length, load balancing, and time complexity. It decreases the schedule length by 29.75% compared with SA and by 12.25% compared with ACO. ACOSA also provides a higher load-balancing degree, improving the balancing-degree ratio by 36.36% over SA and by 12.13% over ACO.
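To make the SA half of such a hybrid concrete, here is a hedged sketch of simulated annealing applied to cloudlet-to-VM assignment: start from a random assignment, move one cloudlet at a time to a random VM, and accept worse makespans with probability exp(-delta/T) while the temperature cools. This is a generic SA scheduler under invented parameters, not the paper's ACOSA implementation.

```python
# Simulated annealing over cloudlet-to-VM assignments, minimizing makespan.
import math, random

def makespan(assign, lengths, speeds):
    """assign[c] = VM index for cloudlet c; returns finish time of busiest VM."""
    load = [0.0] * len(speeds)
    for c, v in enumerate(assign):
        load[v] += lengths[c] / speeds[v]
    return max(load)

def sa_schedule(lengths, speeds, t0=10.0, cooling=0.95, steps=2000, seed=1):
    rng = random.Random(seed)
    assign = [rng.randrange(len(speeds)) for _ in lengths]   # random start
    cost = makespan(assign, lengths, speeds)
    best, best_cost = assign[:], cost
    t = t0
    for _ in range(steps):
        c = rng.randrange(len(lengths))          # move one cloudlet...
        old = assign[c]
        assign[c] = rng.randrange(len(speeds))   # ...to a random VM
        new_cost = makespan(assign, lengths, speeds)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                      # accept (maybe uphill) move
            if cost < best_cost:
                best, best_cost = assign[:], cost
        else:
            assign[c] = old                      # reject the move
        t *= cooling                             # geometric cooling schedule
    return best, best_cost
```

SA alone converges slowly and its final quality varies with the seed, which is the oscillation the hybrid then corrects by seeding ACO with the SA solution.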
Nowadays, with the huge development of information and computing technologies, cloud computing is becoming a highly scalable and widely used computing technology, based on pay-per-use, remote-access, Internet-based, and on-demand concepts, in which customers are provided with a shared pool of configurable resources. With the high volume of incoming user requests, task scheduling and resource allocation have become major requirements for efficient and effective load balancing of a workload among cloud resources to enhance overall cloud system performance. For these reasons, various types of task scheduling algorithms have been introduced: traditional, heuristic, and meta-heuristic. Heuristic task scheduling algorithms like MET, MCT, Min-Min, and Max-Min play an important role in solving the task scheduling problem. This paper proposes a new hybrid algorithm for the cloud computing environment based on two heuristic algorithms, Min-Min and Max-Min alg...
2018
Cloud computing is one of the most advanced technologies of the present computerized generation, and scheduling plays a major role in it. Connecting Virtual Machines (VMs) to the tasks (cloudlets) assigned to them is an attractive field of research. This paper introduces a confined Cloudlet Migration based scheduling algorithm using Enhanced First Come First Serve (CMeFCFS). The objective of this work is to minimize makespan and cost and to optimize resource utilization. The proposed work has been simulated in the CloudSim toolkit package, and the results have been compared with pre-existing scheduling algorithms under the same experimental configuration. Important parameters like execution time, completion time, cost, makespan, and resource utilization are compared to measure the performance of the proposed algorithm. Extensive simulation results show that the introduced work performs better than existing approaches; 99.8% resource utilization has been achieved by CMeFCFS. ...
2019
Management of cloud computing resources is critical, especially when several cloudlets are submitted simultaneously. It is therefore very important to use highly efficient cloudlet scheduling techniques to guarantee efficient utilization of computing resources. This paper presents a two-phase approach, called SAAC, for scheduling cloudlets onto the Virtual Machines (VMs) of a cloud computing environment to balance the workload on the available VMs and minimize makespan (i.e., the completion time at the maximally loaded VM). In the first phase, the SAAC approach applies Simulated Annealing (SA) to find a near-optimal scheduling of the cloudlets. In the second phase, SAAC improves the cloudlet distribution by applying Ant Colony Optimization (ACO), taking the solution obtained by SA as the initial solution. The SAAC approach overcomes the computational time complexity of the ACO algorithm and the low solution quality of SA. The prop...
International Journal on Cloud Computing: Services and Architecture, 2014
Modern-day demand for resource-hungry services and applications in the IT sector has led to the development of cloud computing. A cloud computing environment involves high-cost infrastructure on the one hand and requires large-scale computational resources on the other. These resources need to be provisioned (allocated and scheduled) to end users in the most efficient manner, so that the tremendous capabilities of the cloud are utilized effectively and efficiently. In this paper we discuss a selective algorithm for on-demand allocation of cloud resources to end users. This algorithm is based on min-min and max-min, two conventional task scheduling algorithms, and uses certain heuristics to select between the two so that the overall makespan of the tasks on the machines is minimized. The tasks are scheduled on machines in either a space-shared or a time-shared manner. We evaluate our provisioning heuristics using a cloud simulator called CloudSim, and also compare our approach with the statistics obtained when resources were provisioned in a First-Come-First-Serve (FCFS) manner. The experimental results show that the overall makespan of tasks on a given set of VMs is reduced significantly in different scenarios.
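The selection step of such a selective scheme can be sketched as follows. The rule below, which uses Max-Min when most tasks are shorter than the mean (i.e., a few long tasks dominate the batch) and Min-Min otherwise, is one published variant of this idea and is not necessarily the authors' exact criterion; the two scheduler callables are taken as parameters rather than implemented here.

```python
# Selective dispatch between Min-Min and Max-Min based on batch skew.

def selective(lengths, speeds, min_min, max_min):
    """lengths: cloudlet sizes; min_min/max_min: scheduler callables."""
    mean = sum(lengths) / len(lengths)
    shorter = sum(1 for l in lengths if l < mean)
    # Skewed batch (many short tasks plus a few long ones): Max-Min stops the
    # long tasks from being deferred to the end and stretching the makespan.
    if shorter > len(lengths) / 2:
        return max_min(lengths, speeds)
    # Evenly sized batch: Min-Min's greedy short-first order is safe.
    return min_min(lengths, speeds)
```

The design choice here is that the selector only inspects the length distribution, so it costs O(n) on top of whichever heuristic it dispatches to.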
International Journal of Hybrid Intelligent Systems, 2017
Cloud computing is the ubiquitous on-demand service that has brought a remarkable revolution in the commercialization of High Performance Computing (HPC) [1]. Quality of Service (QoS) is a vital factor that always demands high attention. Efficient resource allocation and management techniques, along with advanced load balancing approaches, make a big difference in total system throughput. Several frameworks and algorithmic approaches have been proposed in these areas to improve throughput. Cloudlets are the tasks formed from the requirements of cloud users and submitted to the Local Queues (LQ) of Virtual Machines (VMs) by the Datacenter Broker (DCB) to be processed. In this paper the main focus is on a cloudlet scheduling policy that enhances the existing Improved Round Robin Cloudlet Scheduling Algorithm (IRRCSA) and Round Robin Algorithm (RRA). CloudSim 3.0.3 is used to implement the model, and several parameters like Context Switching (CS), Waiting Time (WT), and Turnaround Time (TAT) are taken into account to highlight the QoS improvement in comparison with the IRRCSA and RRA approaches.
Research in cloud computing is gaining momentum, and it has been accepted more and more widely by enterprises. This business model offers dynamic, flexible resources to its users on a pay-as-you-use basis. At the time of resource allocation, a user may request multiple resources simultaneously, so a provision is required for optimal allocation of resources. The aim is that the provider should render the desired services to the user, and the user should have reliable and guaranteed services as per the service level agreement (SLA). This paper focuses on the resource allocation problem, which addresses the optimal use and assignment of resources for a particular task. This work explores the current resource scheduling algorithms employed by cloud providers. In this review, the algorithms are divided according to their nature and categorized as dynamic scheduling algorithms, agent-based scheduling algorithms, and cost-optimization-based scheduling. Various algorithms falling in each category are discussed and a comparison among them is performed.
Cloud computing is the most recent computing paradigm in Information Technology, where resources and information are provided on demand and accessed over the Internet. An essential factor in a cloud computing system is task scheduling, which relates to the efficiency of the entire cloud computing environment. In a cloud environment, the scheduling problem is to apportion the tasks of the requesting users to the available resources. This paper aims to offer a genetic-based scheduling algorithm that reduces the waiting time of the overall system. As tasks enter the cloud environment, users have to wait until resources become available, which leads to longer queue lengths and increased waiting times. This paper introduces a task scheduling algorithm based on a genetic algorithm, using a queuing model to minimize the waiting time and queue length of the system.
Cloud computing is a novel technology which aims to handle and provide online services to consumers. In order to have an efficient cloud environment and to use resources properly, task scheduling is one of the issues for which researchers attempt to propose applicable scheduling algorithms. Scheduling in distributed systems such as cloud computing and grid computing is considered NP-complete. Hence, many heuristic and meta-heuristic algorithms have been proposed to optimize the solutions. In this paper we survey several scheduling algorithms and issues related to cloud computing.
2013
Cloud Computing (CC) is emerging as the next-generation platform, which facilitates the user on a pay-as-you-use mode as per requirement. It provides a number of benefits which could not otherwise be realized. The primary aim of CC is to provide efficient access to remote and geographically distributed resources. A scheduling algorithm is needed to manage access to the different resources. There are different types of resource scheduling technologies in the CC environment. These are implemented at different levels, based on parameters like cost, performance, resource utilization, time, priority, physical distance, throughput, bandwidth, and resource availability. In this research paper, various resource allocation scheduling algorithms that provide efficient cloud services have been surveyed and analyzed. Based on the study of the different algorithms, a classification of the scheduling algorithms on the basis of selected features is presented.
Cluster Computing
In a cloud environment, the scheduling problem, an NP-complete problem, can be solved using various metaheuristic algorithms, which are very popular for scheduling tasks because of their effectiveness. Bacterial foraging is a swarm intelligence algorithm inspired by the foraging and chemotactic behaviour of bacteria. This paper proposes a task scheduling algorithm based on bacterial foraging optimization to reduce the idle time of virtual machines while balancing load and reducing runtime. The CloudSim toolkit was used to assess the performance of the proposed method in comparison with several scheduling algorithms. According to the obtained results, makespan and energy consumption were reduced by using the proposed algorithm.
Scientific & Academic Publishing, 2017
Cloud Computing is known for providing services to a variety of users with the aid of very large, scalable, and virtualized resources over the Internet. Due to recent innovative trends in this field, a number of scheduling algorithms have been developed in cloud computing which intend to decrease the cost of the services provided by the service provider in the cloud computing environment. Most modern-day researchers attempt to construct job scheduling algorithms that increase the availability and performance of cloud services, as users have to pay for the available resources/services based on time. Considering all these factors, scheduling plays a crucial role in maximizing the utilization of resources in a cloud computing environment. In this paper, we present a comparative study of various scheduling algorithms and the related issues in cloud computing.
International journal of engineering research and technology, 2018
Cloud Computing is one of the emerging technologies, based upon an on-demand, pay-per-use model. It is a platform where various services like applications, bandwidth, and data are provided to its users over the Internet. The main objective of job scheduling algorithms in Cloud Computing is to optimize resource allocation and utilization to meet user requirements; for cloud service providers, it is the efficient use of resources and thus the attainment of maximum profit. All this leads directly to the requirement of job scheduling in Cloud Computing. Scheduling is the method of deciding how to distribute resources amongst the various available tasks or processes so as to achieve maximum throughput efficiently. In this paper, various job scheduling algorithms are presented on the basis of the different parameters by which they provide efficient cloud services.
Cloud computing makes it possible to access applications and data from anywhere, and so has become an important new technology. Cloud computing is a model that enables on-demand access and charges on the basis of the amount of resources consumed. Scheduling plays a key role in the cloud. The present study surveys various task scheduling and resource allocation algorithms for the cloud. By comparing the various algorithms, we conclude that the choice of scheduling algorithm depends on the type of task to be scheduled.