2013
This paper presents a methodology to reduce context-switch overhead in cloud computing. Migration of jobs in the cloud makes context switching a challenging task that increases communication costs and results in low utilisation. To achieve this objective, a dynamic context-switching parallel algorithm is proposed in this paper. To enhance the utilisation and computing speed of the cloud, an improved strategy based on this algorithm is used. The proposed methodology dynamically changes the decision-making process of the context-switch parallel algorithm, considerably reducing context-switch overheads. To evaluate the proposed approach, a cloud simulation environment is established, in which the approach significantly reduces the waiting time of jobs during execution.
2014
The cloud computing paradigm is attracting an increasing number of complex applications to run in remote data centers. Scheduling is an important issue in the cloud: its main goal is to distribute the load among processors and maximize their utilization by minimizing total task execution time while maintaining the responsiveness of parallel jobs. Existing parallel scheduling mechanisms have drawbacks such as high context-switching rates, long waiting times, and long response times. This paper presents a comparative study of various scheduling algorithms used in the cloud. It discusses three techniques (backfilling, gang scheduling, and migration) and also proposes a two-tier architecture for workload consolidation.
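Of the three techniques the abstract names, backfilling is the easiest to make concrete. The following is a minimal sketch of EASY-style backfilling on a homogeneous processor pool, under our own simplifying assumptions (known runtimes, a single resource dimension); the job names and the scheduling rule details are illustrative, not taken from the paper.

```python
# Hedged sketch of EASY backfilling: the head-of-queue job reserves the
# earliest time it can start; later jobs may jump ahead only if they fit
# in the currently free processors and finish before that reservation.

def easy_backfill(jobs, total_procs):
    """jobs: list of (name, procs, runtime) in FCFS order.
    Returns a list of (name, start_time) pairs."""
    free = total_procs
    now = 0.0
    running = []                 # (finish_time, procs) of running jobs
    queue = list(jobs)
    schedule = []

    def advance():               # jump to the next job completion
        nonlocal now, free
        running.sort()
        t, p = running.pop(0)
        now = t
        free += p

    while queue:
        head = queue[0]
        if head[1] <= free:      # head job fits: start it immediately
            queue.pop(0)
            running.append((now + head[2], head[1]))
            free -= head[1]
            schedule.append((head[0], now))
            continue
        # Head is blocked: find its reservation ("shadow") time.
        running.sort()
        need, shadow, avail = head[1], now, free
        for ft, p in running:
            avail += p
            shadow = ft
            if avail >= need:
                break
        # Backfill the first later job that fits now and ends in time.
        filled = False
        for i, (name, procs, rt) in enumerate(queue[1:], 1):
            if procs <= free and now + rt <= shadow:
                queue.pop(i)
                running.append((now + rt, procs))
                free -= procs
                schedule.append((name, now))
                filled = True
                break
        if not filled:
            advance()            # nothing backfillable: wait
    return schedule
```

With 8 processors and jobs A(4 procs, 10 s), B(8 procs, 5 s), C(2 procs, 3 s), C backfills ahead of the blocked job B without delaying B's start.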
Dynamic resource allocation is one of the most challenging problems in resource management, and in cloud computing it has attracted the attention of the research community in the last few years; many researchers around the world have come up with new ways of facing this challenge. Ad-hoc parallel data processing has emerged as one of the killer applications for Infrastructure-as-a-Service (IaaS) clouds. A number of cloud providers have started to include frameworks for parallel data processing in their products, making it easy for customers to access these services and deploy their programs. However, the processing frameworks currently in use were designed for static, homogeneous cluster setups, so the allocated resources may be inadequate for large parts of the submitted tasks and may unnecessarily increase processing cost and time. Moreover, due to the opaque nature of the cloud, static allocation of resources is possible, but dynamic allocation is not. The proposed generic data processing framework is intended to explicitly exploit dynamic resource allocation in the cloud for task scheduling and execution.
IOSR Journal of Computer Engineering, 2014
Cloud computing is an essential ingredient of modern computing systems. It provides on-demand service by offering dynamic resource allocation for reliable and highly available services to the public in a pay-as-you-consume manner. In a cloud computing environment, multiple cloud users can request a number of cloud services in parallel, so there must be a provision that all resources are made available to requesting users efficiently to satisfy their needs. This survey paper reviews various strategies for dynamic resource allocation in cloud computing, based on the Linear Scheduling Strategy for Resource Allocation, Topology-Aware Resource Allocation (TARA), and Dynamic Resource Allocation for Parallel Data Processing. Moreover, the limitations, significance, and advantages of resource allocation in cloud computing systems are also discussed.
2020
Today, cloud computing has placed itself in every field of the IT industry. It provides infrastructure, platform, and software as amenities to users, effortlessly available via the internet. It has a large number of users and has to deal with a large number of task executions, which requires a suitable task-scheduling algorithm. Virtualization and parallelism are the core components of cloud computing for optimizing resource utilization and increasing system performance. Virtualization is used to provide IT infrastructure on demand, and to take more advantage of virtualization, parallel processing plays an important role in improving system performance. It is therefore important to understand the mutual effect of parallel processing and virtualization on system performance, and this research helps to better understand that effect. In this paper, we study the effect of parallelization and virtualization on the scheduling of tasks to opti...
SN Computer Science, 2020
In the present-day scenario, cloud computing is an attractive subject for IT and non-IT personnel. It is a service-oriented, pay-per-use computational model, with working models spanning a service-oriented delivery mechanism as well as a deployment-oriented infrastructure mechanism. Data centers are the backbone of cloud computing, and the massive participation of the public has increased the load on cloud servers, so proper scheduling of resources is always needed and quality of service must be provided as per the service-level agreement. The virtualization technique is the main reason behind the huge success of the cloud. Today, multi-cloud exchanges offer the next level in direct connectivity, allowing organizations to safely and easily expand multi-cloud capabilities. Exchanges eliminate the added worries that the open Internet can bring, as well as the tedious provisioning and configuring that comes with connecting to the public Internet. Importantly, multi-cloud exchanges allow organizations to establish a single connection to multiple cloud providers at the same time through an Ethernet switching platform, rather than wrestling with multiple individual connections to cloud providers.
2015
The overall performance of the cloud is influenced by the scheme adopted to balance the load among the virtual machines, and an efficient way to handle both dependent and independent tasks is the need of the hour. The problem is to optimize cloud utilization by devising a strategy that handles task scheduling and load balancing effectively. Various algorithms exist for load balancing and scheduling in the cloud, and the existing algorithms are studied. Parameters such as the processing capabilities of the virtual machines, the current load on the virtual machines, job lengths, and job interdependencies are considered to propose an algorithm that outperforms the existing ones. Results prove that the proposed algorithm performs better than the existing ones in terms of execution time, the number of tasks delayed before getting executed on a VM, and the number of task migrations.
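A minimal sketch of the kind of capacity- and load-aware mapping the abstract describes: each job goes to the VM with the earliest estimated completion time, given its current queued work and processing capability. The MIPS figures, the longest-job-first ordering, and the min-completion-time rule are our illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: greedy task-to-VM placement weighing VM capacity,
# current load, and job length (job interdependencies are omitted).

def assign_jobs(jobs, vm_capacity):
    """jobs: list of job lengths in million instructions (MI).
    vm_capacity: list of VM speeds in MIPS.
    Returns per-VM lists of assigned job lengths."""
    load = [0.0] * len(vm_capacity)          # queued MI per VM
    placement = [[] for _ in vm_capacity]
    for j in sorted(jobs, reverse=True):     # longest-first aids balance
        # Pick the VM that would finish this job earliest.
        best = min(range(len(vm_capacity)),
                   key=lambda v: (load[v] + j) / vm_capacity[v])
        load[best] += j
        placement[best].append(j)
    return placement
```

For jobs of 100 to 400 MI on a 1000 MIPS and a 500 MIPS VM, the fast VM absorbs most of the work and the two finish within about 0.1 s of each other.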
Infrastructure as a Service (IaaS) clouds have emerged as a promising new platform for massively parallel data processing. By eliminating the need for large upfront capital expenses, operators of IaaS clouds offer their customers the unprecedented possibility of acquiring access to a highly scalable pool of computing resources on a short-term basis, enabling them to execute data analysis applications at a scale traditionally reserved for large Internet companies and research facilities. However, despite the growing popularity of these kinds of distributed applications, the current parallel data processing frameworks, which support the creation and execution of large-scale data analysis jobs, still stem from the era of dedicated, static compute clusters and have so far disregarded the particular characteristics of IaaS platforms. Nephele is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today's IaaS clouds for both task scheduling and execution. Particular tasks of a processing job can be assigned to different types of virtual machines, which are automatically instantiated and terminated during job execution. However, the current algorithms do not consider resource overload or underutilization during job execution. In this paper, we focus on increasing the efficacy of the scheduling algorithm for real-time cloud computing services. Our algorithm utilizes turnaround-time utility efficiently by differentiating it into a gain function and a loss function for a single task. The algorithm also assigns high priority to tasks that can complete early and lower priority to the abortion issues of real-time tasks. The algorithm has been implemented on top of the Round Robin (RR) method; it outperforms existing utility-based scheduling algorithms, and we compare its performance against them.
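The split of turnaround-time utility into a gain part and a loss part can be sketched as follows. The linear shapes, the `max_gain`/`max_loss` parameters, and the deadline normalization are our illustrative assumptions; the paper's actual utility functions may differ.

```python
# Hedged sketch of a turnaround-time utility: tasks finishing before
# their deadline earn a gain that decays as turnaround grows; tasks
# finishing late contribute a loss that deepens with the overrun.

def task_utility(turnaround, deadline, max_gain=10.0, max_loss=5.0):
    """Positive gain region up to the deadline, negative loss after."""
    if turnaround <= deadline:
        return max_gain * (1.0 - turnaround / deadline)   # gain function
    overrun = turnaround - deadline
    return -min(max_loss, max_loss * overrun / deadline)  # loss function

def schedule_order(tasks):
    """tasks: list of (name, expected_turnaround, deadline).
    Highest expected utility first, mirroring the abstract's rule of
    favouring tasks likely to complete early."""
    return sorted(tasks, key=lambda t: -task_utility(t[1], t[2]))
```

A task expected to finish at half its deadline scores half the maximum gain; one expected to finish at twice its deadline hits the full loss.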
Because of the rapid development of computer and network technologies, parallel, distributed, and cloud computing have been widely used to solve problems that require high-performance computing power. Developments in parallel computing and its applications are summarized and briefly explained. Research on distributed computing has increased because of developments in network technologies and increased data transfer rates per unit time (network bandwidth), and in recent years research has focused on cloud computing using many computers. In this study, parallel, distributed, and cloud computing are examined with respect to various parameters.
Cloud computing is a developing computing technology that has reached every entity in the digital world, whether in the personal or government sector. Given the significance of cloud computing, finding new ideas for advancing cloud computing services is an active area of research. With the initiation of the cloud, deployment and hosting became easier and cheaper through the pay-per-use model offered by cloud providers. Clouds usually have powerful data centers and a data center controller to handle large numbers of users; the cloud is a platform providing a dynamic pool of resources and virtualization of services. To properly manage the resources of the service provider, load balancing is required for the jobs that are submitted to the data center controller. Load balancing is a technique to distribute workload across many virtual processing units in a server over the network to achieve the least data processing time, optimal resource utilization, and the least average response time. Various load balancing algorithms exist: round robin, least connection, active monitoring, equally spread current execution, and throttled. In the existing work, the throttled load-balancing approach distributes incoming jobs uniformly among virtual machines based on their state, whether busy or available. In this approach, all virtual machines have the same configuration in terms of task processing, with no distinction between static and dynamic tasks. To overcome this, a new scheme is proposed based on appropriate allocation of virtual machines for static and dynamic tasks, and on finding the optimum cost for the user as well as the service provider. The proposed model is implemented and tested on a simulation toolkit (CloudAnalyst). Results validate the correctness of the framework and show a significant improvement over the existing work.
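The baseline throttled policy the abstract builds on can be sketched in a few lines: an index table records each VM's state, each incoming request goes to the first available VM, and when all VMs are busy the request waits. This is a minimal sketch of the classic CloudAnalyst-style policy, not the paper's extended static/dynamic-aware variant.

```python
# Hedged sketch of throttled load balancing: a state table of VMs,
# first-available allocation, and -1 as the "all busy, please queue"
# signal to the data center controller.

class ThrottledBalancer:
    def __init__(self, n_vms):
        self.busy = [False] * n_vms      # per-VM availability table

    def allocate(self):
        for vm, is_busy in enumerate(self.busy):
            if not is_busy:
                self.busy[vm] = True
                return vm                # first available VM wins
        return -1                        # all busy: request must wait

    def release(self, vm):
        self.busy[vm] = False            # VM finished its job
```

The proposed extension in the abstract would sit on top of `allocate`, choosing among available VMs by task type (static vs. dynamic) and cost rather than by index order.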
Recently, a lot of interest has been shown by researchers in improving workload scheduling on cloud platforms. However, execution of scientific workflows on cloud platforms is time-consuming and expensive. As users are charged based on hours of usage, much research has emphasized minimizing processing time to reduce cost. However, processing cost can also be reduced by minimizing energy consumption, especially when resources are heterogeneous in nature, and very limited work has considered optimizing cost with energy and processing-time parameters together while meeting task quality-of-service (QoS) requirements. This paper presents a cost- and performance-aware workload scheduling (CPA-WS) technique for heterogeneous cloud platforms, with a cost optimization model that minimizes processing time and energy dissipation for task execution. Experiments are conducted using two widely used workflows, Inspiral and CyberShake. The outcomes show that CPA-WS significantly reduces energy, time, and cost in comparison with the standard workload scheduling model.
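The kind of joint time-and-energy cost objective the abstract describes can be sketched as a weighted sum. The prices, power figures, weights, and the weighted-sum form are all our illustrative assumptions, not CPA-WS's actual formulation.

```python
# Hedged sketch: cost of running a task on a VM combines billed usage
# time and the monetary value of the energy dissipated; a scheduler can
# then pick the VM minimizing this combined cost.

def task_cost(task_mi, vm_mips, price_per_hour, power_watts,
              energy_price_per_kwh, w_time=0.5, w_energy=0.5):
    hours = task_mi / vm_mips / 3600.0             # execution time (h)
    time_cost = hours * price_per_hour             # billed usage cost
    energy_cost = hours * power_watts / 1000.0 * energy_price_per_kwh
    return w_time * time_cost + w_energy * energy_cost

def cheapest_vm(task_mi, vms, energy_price_per_kwh=0.12):
    """vms: list of (mips, price_per_hour, power_watts) tuples.
    Returns the index of the VM with the lowest combined cost."""
    return min(range(len(vms)),
               key=lambda i: task_cost(task_mi, *vms[i],
                                       energy_price_per_kwh))
```

Under this toy model a faster, pricier VM can still win overall because it cuts both the billed hours and the energy drawn.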