In a multi-cloud environment, distributed applications may need minimal inter-node latency to improve their performance. In this paper, we propose a model for grouping nodes with respect to network latency; application scheduling is then performed on the basis of that latency. The model is part of our proposed Cloud Scheduler module, which supports scheduling decisions based on different criteria, network latency and the resulting node grouping being one of them. The main contribution of the paper is that the proposed latency grouping algorithm generates no additional network traffic for its own computation, works well with incomplete latency information, and performs intelligent grouping on the basis of latency. The paper thus addresses an important problem in cloud computing: locating communicating virtual machines so that the latency between them is minimal, and grouping them with respect to inter-node latency.
2014
• Proposed a model for the grouping of nodes with respect to network latency.
• It helps the cloud scheduler in scheduling decisions on the basis of latency.
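To make the idea concrete, the following is a minimal sketch of latency-based node grouping, not the paper's actual algorithm: each node is placed into the first group whose known members are within a latency threshold, and unmeasured pairs are treated optimistically, which is one simple way to cope with incomplete latency information. All names (`group_nodes_by_latency`, the threshold, the sample latencies) are illustrative assumptions.

```python
from typing import Dict, List, Optional, Tuple

Latency = Dict[Tuple[str, str], float]  # measured pairwise latencies (may be incomplete)

def lookup(latencies: Latency, a: str, b: str) -> Optional[float]:
    """Return the measured latency between a and b in either direction, or None if unknown."""
    return latencies.get((a, b), latencies.get((b, a)))

def group_nodes_by_latency(nodes: List[str], latencies: Latency,
                           threshold_ms: float) -> List[List[str]]:
    """Greedily place each node into the first group whose known members are all
    within threshold_ms; unknown pairs are treated optimistically (assumed close)."""
    groups: List[List[str]] = []
    for node in nodes:
        placed = False
        for group in groups:
            if all((lat := lookup(latencies, node, member)) is None or lat <= threshold_ms
                   for member in group):
                group.append(node)
                placed = True
                break
        if not placed:
            groups.append([node])
    return groups

if __name__ == "__main__":
    nodes = ["n1", "n2", "n3", "n4"]
    latencies = {("n1", "n2"): 3.0, ("n1", "n3"): 40.0, ("n2", "n4"): 5.0}  # ("n3","n4") unmeasured
    print(group_nodes_by_latency(nodes, latencies, threshold_ms=10.0))
    # -> [['n1', 'n2', 'n4'], ['n3']]
```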
International Journal of Electrical and Computer Engineering (IJECE), 2021
Cloud computing is an emerging distributed computing paradigm. However, it requires certain initiatives tailored to the cloud environment, such as an on-the-fly mechanism for providing resource availability based on the rapidly changing demands of customers. Although resource allocation is an important problem and has been widely studied, certain criteria still need to be considered, including meeting users' quality of service (QoS) requirements. High QoS can be guaranteed only if resources are allocated in an optimal manner. This paper proposes a latency-aware max-min algorithm (LAM) for allocation of resources in cloud infrastructures. The proposed algorithm was designed to address challenges associated with resource allocation, such as variations in user demands and on-demand access to unlimited resources. It is capable of allocating resources in a cloud-based environment with the target of enhancing infrastructure-level performance and maximizing profit through optimum allocation of resources. A priority value is also associated with each user, calculated by the analytic hierarchy process (AHP). The results validate the superiority of LAM, which performs better than other state-of-the-art algorithms and allocates resources flexibly under fluctuating resource demand patterns.
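As a rough illustration of latency-aware max-min allocation (a sketch only, not the LAM algorithm from the paper, and omitting the AHP priority step), the loop below repeatedly picks the pending task whose best achievable completion time is largest and binds it to the VM that achieves it. The cost model, task lengths, and latencies are assumed values.

```python
def max_min_allocate(task_lengths, vm_capacities, latency_ms):
    """Max-min allocation: repeatedly pick the pending task whose best (minimum)
    completion time is largest, then bind it to that best VM. Completion time is
    execution time plus a one-way latency term (an illustrative cost model)."""
    ready = {vm: 0.0 for vm in vm_capacities}   # time at which each VM becomes free
    schedule = {}
    pending = set(task_lengths)
    while pending:
        chosen_task, chosen_vm, chosen_ct = None, None, -1.0
        for t in pending:
            # minimum completion time of task t over all VMs
            ct, vm = min(
                (ready[v] + task_lengths[t] / vm_capacities[v] + latency_ms[v] / 1000.0, v)
                for v in vm_capacities
            )
            if ct > chosen_ct:                  # keep the task whose best time is worst
                chosen_task, chosen_vm, chosen_ct = t, vm, ct
        schedule[chosen_task] = chosen_vm
        ready[chosen_vm] = chosen_ct
        pending.remove(chosen_task)
    return schedule

if __name__ == "__main__":
    tasks = {"t1": 400.0, "t2": 100.0, "t3": 250.0}   # task lengths (MI), assumed
    vms = {"vm1": 100.0, "vm2": 50.0}                 # VM capacities (MIPS), assumed
    lat = {"vm1": 20.0, "vm2": 5.0}                   # network latency to each VM (ms), assumed
    print(max_min_allocate(tasks, vms, lat))
```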
Cloud computing refers to Internet-based development and use of computer technology, and hence can be described as a model of Internet-based computing. Scheduling is a critical problem in cloud computing because a cloud provider has to serve many users, so it is a major issue in establishing cloud computing systems. The main goal of scheduling is to maximize resource utilization and minimize the processing time of tasks. In this thesis, an efficient task-grouping approach is proposed for task scheduling in a computational cloud. The proposed work groups the tasks before resource allocation according to resource capacity, in order to reduce communication overhead. Cloud resources are heterogeneous in nature, owned and managed by different organizations with different allocation policies. In our scheduling algorithm, tasks are scheduled based on the computational and communication capabilities of the resources. Here tasks are ...
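A minimal sketch of the task-grouping idea, under the assumption that fine-grained tasks are coarsened to match what a resource can process within a chosen dispatch granularity; the function name and the numbers are illustrative, not the thesis' actual algorithm.

```python
def group_tasks(task_lengths_mi, resource_mips, granularity_s):
    """Coarsen fine-grained tasks into groups sized to what a resource can process
    within granularity_s seconds, so each group is dispatched as a single job and
    per-task communication overhead is amortized."""
    capacity_mi = resource_mips * granularity_s     # work a resource can do per dispatch
    groups, current, current_sum = [], [], 0.0
    for i, length in enumerate(task_lengths_mi):
        if current and current_sum + length > capacity_mi:
            groups.append(current)
            current, current_sum = [], 0.0
        current.append(i)
        current_sum += length
    if current:
        groups.append(current)
    return groups

# Example: 6 small tasks, a 500-MIPS resource, 3-second dispatch granularity (assumed figures)
print(group_tasks([200, 300, 700, 100, 400, 250], resource_mips=500, granularity_s=3))
# -> [[0, 1, 2, 3], [4, 5]]
```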
ACM SIGCOMM Computer Communication Review, 2015
Given a set of datacenters and groups of application clients, well-connected datacenters can be rented as traffic proxies to reduce client latency. Rental costs must be minimized while meeting the application specific latency needs. Here, we formally define the Cooperative Group Provisioning problem and show it is NP-hard to approximate within a constant factor. We introduce a novel greedy approach and demonstrate its promise through extensive simulation using real cloud network topology measurements and realistic client churn. We find that multi-cloud deployments dramatically increase the likelihood of meeting group latency thresholds with minimal cost increase compared to single-cloud deployments.
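The paper's greedy approach is not spelled out in this excerpt; the following is only a set-cover-style greedy sketch of renting proxy datacenters, which repeatedly rents the proxy that satisfies the most still-unmet client groups per unit cost. All identifiers and figures are assumptions.

```python
def greedy_provision(proxies, groups, latency, cost, threshold_ms):
    """Set-cover-style greedy: repeatedly rent the proxy datacenter that satisfies
    the most still-unmet client groups per unit rental cost."""
    unmet = set(groups)
    rented = []
    while unmet:
        best, best_score, best_covered = None, 0.0, set()
        for p in proxies:
            covered = {g for g in unmet if latency[(p, g)] <= threshold_ms}
            score = len(covered) / cost[p] if covered else 0.0
            if score > best_score:
                best, best_score, best_covered = p, score, covered
        if best is None:          # no proxy can help the remaining groups
            break
        rented.append(best)
        unmet -= best_covered
    return rented, unmet

# Assumed toy instance: two rentable proxies, three client groups, 50 ms threshold
proxies = ["dc-east", "dc-west"]
groups = ["g1", "g2", "g3"]
latency = {("dc-east", "g1"): 30, ("dc-east", "g2"): 90, ("dc-east", "g3"): 40,
           ("dc-west", "g1"): 80, ("dc-west", "g2"): 35, ("dc-west", "g3"): 100}
cost = {"dc-east": 10.0, "dc-west": 8.0}
print(greedy_provision(proxies, groups, latency, cost, threshold_ms=50))
```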
International Journal of Information Technology and Computer Science, 2014
Cloud computing is a major paradigm shift in computing systems. It provides high scalability and flexibility across an assortment of on-demand services. Improving the performance of distributed applications in a multi-cloud environment requires both high energy efficiency and minimal inter-node latency. A major problem is that the energy efficiency of a cloud data center is low when the number of active servers is small, and rises as it grows. To address both energy efficiency and network latency, a novel energy-efficient particle swarm optimization model for multi-job scheduling, together with a latency model for grouping nodes with respect to network latency, is proposed. The scheduling procedure is carried out on the basis of network latency and energy efficiency. The scheduling scheme is the main part of the Cloud Scheduler component, which helps the scheduler make scheduling decisions on the basis of different criteria. It also works well with incomplete latency information and performs intelligent grouping on the basis of both network latency and energy efficiency. A realistic particle swarm optimization algorithm is designed for the cloud servers, with an overall energy-efficiency objective and a fitness value computed for each cloud server. In addition, a local search operator is introduced to speed up convergence and improve the exploration ability of the algorithm. Finally, experiments demonstrate that the proposed algorithm is effective and efficient.
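By way of illustration only (not the paper's algorithm), the snippet below sketches the two ingredients named in the abstract: a fitness function that combines an energy term with a latency term, and a simple local-search operator that keeps improving single-job reassignments. All weights and data are assumed, and the full particle swarm optimization loop is omitted.

```python
import random

def fitness(assignment, job_load, server_idle_w, server_busy_w, latency_ms):
    """Illustrative fitness: idle power of the servers in use, plus busy power
    weighted by the load placed on each server, plus the mean latency of the
    chosen servers. Lower is better; the weighting is an assumption."""
    used = set(assignment)
    energy = sum(server_idle_w[s] for s in used)
    for job, srv in enumerate(assignment):
        energy += server_busy_w[srv] * job_load[job]
    latency = sum(latency_ms[s] for s in used) / max(len(used), 1)
    return energy + latency

def local_search(assignment, n_servers, score, tries=20):
    """Simple local-search operator: try random single-job reassignments and keep improvements."""
    best = list(assignment)
    for _ in range(tries):
        cand = list(best)
        cand[random.randrange(len(cand))] = random.randrange(n_servers)
        if score(cand) < score(best):
            best = cand
    return best

if __name__ == "__main__":
    job_load = [1.0, 0.5, 2.0]           # assumed relative job loads
    idle_w, busy_w = [50.0, 40.0], [30.0, 45.0]   # assumed per-server power figures (W)
    lat = [5.0, 20.0]                     # assumed per-server latencies (ms)
    score = lambda a: fitness(a, job_load, idle_w, busy_w, lat)
    print(local_search([0, 1, 0], n_servers=2, score=score))
```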
In recent years, information and communication technology (ICT) has seen a new paradigm of utility computing called cloud computing. For cloud consumers, high performance of the cloud service and satisfaction of the service level agreement (SLA) are always important. In cloud computing, task scheduling algorithms need further improvement in grouping tasks, so as to reduce response time and enhance the utilization of computing resources. The grouping strategy considers the processing capacity, memory size, and service-type requirement of each task to realize the optimization for the cloud computing environment. It also improves the computation/communication ratio and the utilization of available resources by grouping user tasks before resource allocation. Experiments were conducted in a simulated cloud computing environment by generating services and task requests for cloud consumers. The results compare our strategy with an improved activity-based costing algorithm.
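One possible reading of this grouping strategy, sketched here under the assumption that tasks are first bucketed by service type and then packed into groups that fit a VM's memory; the names and sizes are illustrative, not the paper's actual procedure.

```python
from collections import defaultdict

def group_by_service_and_memory(tasks, vm_memory_mb):
    """Bucket tasks by required service type, then pack each bucket into groups
    whose total memory demand fits within one VM's memory."""
    buckets = defaultdict(list)
    for task in tasks:
        buckets[task["service"]].append(task)
    groups = []
    for service, bucket in buckets.items():
        current, used = [], 0
        for task in sorted(bucket, key=lambda t: t["mem_mb"], reverse=True):
            if current and used + task["mem_mb"] > vm_memory_mb:
                groups.append((service, current))
                current, used = [], 0
            current.append(task["name"])
            used += task["mem_mb"]
        if current:
            groups.append((service, current))
    return groups

# Assumed toy workload and a 2 GB VM
tasks = [
    {"name": "t1", "service": "web", "mem_mb": 512},
    {"name": "t2", "service": "web", "mem_mb": 1024},
    {"name": "t3", "service": "batch", "mem_mb": 2048},
    {"name": "t4", "service": "web", "mem_mb": 768},
]
print(group_by_service_and_memory(tasks, vm_memory_mb=2048))
```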
IOSR Journal of Computer Engineering, 2012
Organizations today are leveraging the benefits of cloud computing to increase flexibility and agility and to reduce cost. However, that flexibility can also pose networking challenges: by moving applications off-site, companies need good network connectivity between a data-center site and a cloud provider so that users do not experience performance degradation. Good connectivity comes in two forms: sufficient bandwidth and low latency. Distributed data centers improve service access latency and bandwidth. A virtualized cloud data center enables IT organizations to share compute resources across multiple applications and user groups in a much more dynamic way than is possible in traditional environments, where applications, middleware, and infrastructure are tightly coupled and resource allocations are highly static. The goal is to enable users to reduce the cost and complexity of application provisioning and operations in virtualized data centers, while automation frees operational management from the burden of manual processes.
IEEE Transactions on Services Computing, 2017
Cloud systems empower big data management by providing virtual machines (VMs) to process data nodes (DNs) in a faster, cheaper, and more effective way. The efficiency of a VM allocation is an important concern that is influenced by communication latencies. In the literature, it has been proved that VM assignment minimizing communication latency under the triangle inequality admits a 2-approximation. However, a 2-approximation solution is not efficient enough, as data center networks are not limited to the triangle inequality. In this paper, we define the quadrilateral inequality property for latencies, under which the VM assignment problem minimizing communication latency lies in the P (polynomial) class. Indeed, we propose an algorithm for assigning VMs to DNs that minimizes the maximum latency among allocated VMs as well as between DNs and their assigned VMs. This algorithm is latency optimal for networks with the quadrilateral inequality and a 2-approximation for networks with the triangle inequality. Moreover, an extension of the proposed method can be applied to cloud elasticity. The simulation results illustrate the good performance and scalability of our method in various known data center networks.
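The paper's optimal algorithm is not reproduced here; as a generic baseline for the same objective, the sketch below minimizes the maximum VM-to-data-node latency by binary-searching a latency threshold and testing feasibility with a small augmenting-path bipartite matching. It assumes at least as many data nodes as VMs, and the example matrix is made up.

```python
def min_max_latency_assignment(latency):
    """Assign each VM to a distinct data node so the maximum VM-DN latency is
    minimized: binary-search the threshold over observed latency values and
    test feasibility with an augmenting-path bipartite matching (Kuhn's algorithm)."""
    n_vm, n_dn = len(latency), len(latency[0])

    def feasible(limit):
        match = [-1] * n_dn                   # match[d] = VM currently assigned to data node d

        def augment(v, seen):
            for d in range(n_dn):
                if latency[v][d] <= limit and d not in seen:
                    seen.add(d)
                    if match[d] == -1 or augment(match[d], seen):
                        match[d] = v
                        return True
            return False

        return all(augment(v, set()) for v in range(n_vm))

    values = sorted({latency[v][d] for v in range(n_vm) for d in range(n_dn)})
    lo, hi = 0, len(values) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(values[mid]):
            hi = mid
        else:
            lo = mid + 1
    return values[lo]

# Assumed 3 VMs x 3 DNs latency matrix (ms); the minimum achievable maximum here is 6
print(min_max_latency_assignment([[2, 9, 7], [8, 4, 6], [5, 3, 10]]))
```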
The Journal of Supercomputing, 2015
Geo-distributed data centers in cloud computing are becoming increasingly popular due to their lower end-user perceived latency and increased reliability for distributed applications. The important challenge of resource allocation in cloud management is more pronounced in geo-distributed data centers than in traditional data centers. A geo-distributed cloud manager faces applications whose virtual machines (VMs) are far apart and need to interact with end users, access distributed data, and communicate with each other. In such applications, the service level agreement is not met if the communication latency is not bounded. In this paper, we focus on the problem of finding data centers to host VMs when the requested VMs are located in different geo-distributed data centers and are sensitive to communication latency. We propose an algorithm that minimizes communication latency by taking into account whether the cloud network topology is a tree or that of the Internet. Moreover, our algorithm can use users' locations to find better candidate solutions. For the tree topology, we prove that our algorithm finds a solution whose latency is minimum. In addition, we show that our algorithm performs well in the Internet topology, with simulation results indicating that it can reduce communication latency by up to 92% compared to existing algorithms.
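To make the objective concrete, here is a brute-force baseline rather than the paper's tree or Internet-topology algorithm: it enumerates mappings of VM groups to datacenters, discards those that violate capacity, and keeps the mapping with the smallest worst-case inter-datacenter latency. The topology, capacities, and group sizes are assumptions.

```python
from itertools import product

def place_vms(vm_groups, datacenters, latency, capacity):
    """Exhaustive baseline: try every mapping of VM groups to datacenters that
    respects capacity and keep the one with the smallest worst-case pairwise
    inter-datacenter latency (latency[d][d] is assumed to be 0)."""
    best_map, best_lat = None, float("inf")
    for combo in product(datacenters, repeat=len(vm_groups)):
        load = {}
        for grp, dc in zip(vm_groups, combo):
            load[dc] = load.get(dc, 0) + grp["vms"]
        if any(load[dc] > capacity[dc] for dc in load):
            continue
        worst = max(latency[a][b] for a in combo for b in combo)
        if worst < best_lat:
            best_map = dict(zip((g["name"] for g in vm_groups), combo))
            best_lat = worst
    return best_map, best_lat

# Assumed toy topology: three datacenters, pairwise latencies in ms
dcs = ["dc1", "dc2", "dc3"]
latency = {
    "dc1": {"dc1": 0, "dc2": 20, "dc3": 80},
    "dc2": {"dc1": 20, "dc2": 0, "dc3": 60},
    "dc3": {"dc1": 80, "dc2": 60, "dc3": 0},
}
capacity = {"dc1": 4, "dc2": 2, "dc3": 4}
vm_groups = [{"name": "frontend", "vms": 3}, {"name": "backend", "vms": 2}]
print(place_vms(vm_groups, dcs, latency, capacity))
# -> ({'frontend': 'dc1', 'backend': 'dc2'}, 20)
```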
IGI Global eBooks, 2011
Latency-sensitive and data-intensive applications, such as IoT or mobile services, are leveraged by edge computing, which extends the cloud ecosystem with distributed computational resources in proximity to data providers and consumers. This brings significant benefits in terms of lower latency and higher bandwidth. However, by definition, edge computing has limited resources compared to its cloud counterparts; thus, there exists a trade-off between proximity to users and resource utilization. Moreover, service availability is a significant concern at the edge of the network, where the extensive support systems of cloud data centers are not usually present. To overcome these limitations, we propose a score-based edge service scheduling algorithm that evaluates the network, compute, and reliability capabilities of edge nodes. The algorithm outputs the maximum …
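A hedged sketch of what a score-based ranking of edge nodes could look like, combining normalized network proximity, free compute capacity, and reliability with assumed weights; the actual scoring function of the algorithm is not given in this excerpt.

```python
def score_nodes(nodes, weights=(0.4, 0.3, 0.3)):
    """Illustrative score: combine normalized network proximity, free compute
    capacity, and reliability into a single value and rank edge nodes by it."""
    w_net, w_cpu, w_rel = weights
    max_lat = max(n["latency_ms"] for n in nodes) or 1.0
    max_cpu = max(n["free_cpu"] for n in nodes) or 1.0
    scored = []
    for n in nodes:
        score = (w_net * (1.0 - n["latency_ms"] / max_lat)   # closer nodes score higher
                 + w_cpu * (n["free_cpu"] / max_cpu)          # more free capacity scores higher
                 + w_rel * n["reliability"])                  # reliability already in [0, 1]
        scored.append((score, n["name"]))
    return sorted(scored, reverse=True)

# Assumed node inventory: two edge nodes and one distant cloud datacenter
print(score_nodes([
    {"name": "edge-a", "latency_ms": 5.0, "free_cpu": 2.0, "reliability": 0.99},
    {"name": "edge-b", "latency_ms": 30.0, "free_cpu": 8.0, "reliability": 0.90},
    {"name": "cloud-dc", "latency_ms": 80.0, "free_cpu": 64.0, "reliability": 0.999},
]))
```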