Clouds have evolved as the next-generation platform that facilitates wide-area, on-demand renting of computing or storage services for hosting application services that experience highly variable workloads and require high availability and performance. Interconnecting Cloud computing system components (servers, virtual machines (VMs), application services) through a peer-to-peer routing and information dissemination structure is essential to avoid the provisioning efficiency bottlenecks and single points of failure that are predominantly associated with traditional centralized or hierarchical approaches. These limitations can be overcome by connecting Cloud system components using a structured peer-to-peer network model, such as a distributed hash table (DHT). DHTs offer deterministic information/query routing and discovery with close to logarithmic bounds on network message complexity. By maintaining a small routing state of O(log n) per VM, a DHT str...
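To make the O(log n) routing-state claim concrete, below is a minimal, self-contained sketch of Chord-style DHT lookup. It is illustrative only: all class and method names are invented here, and the ring is simulated in a single process rather than across real VMs.

```java
import java.util.TreeMap;

// Minimal, self-contained sketch of Chord-style DHT lookup (all names
// invented for illustration). Each node keeps only M = log2(ID space)
// finger entries -- an O(log n) routing state -- and greedy routing
// over those fingers resolves a key in roughly O(log n) hops.
public class DhtSketch {
    static final int M = 10;                       // 2^10 = 1024 ids
    static final int SPACE = 1 << M;
    static final TreeMap<Integer, String> ring = new TreeMap<>();

    // First node at or after id on the ring (wraps past zero).
    static int successor(int id) {
        Integer k = ring.ceilingKey(Math.floorMod(id, SPACE));
        return k != null ? k : ring.firstKey();
    }

    // True if x lies in the open ring interval (a, b).
    static boolean between(int a, int x, int b) {
        return a < b ? (a < x && x < b) : (x > a || x < b);
    }

    // Hop to the closest preceding finger until the owner is reached.
    static int lookup(int start, int key) {
        int n = start, hops = 0;
        while (n != successor(key)) {
            int next = n;
            for (int i = M - 1; i >= 0; i--) {     // largest useful finger
                int f = successor(n + (1 << i));
                if (between(n, f, key)) { next = f; break; }
            }
            n = (next == n) ? successor(n + 1) : next;  // else take immediate successor
            hops++;
        }
        return hops;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 32; i++) ring.put((i * 37) % SPACE, "vm-" + i);
        int hops = lookup(ring.firstKey(), 500);
        System.out.println("key 500 -> node " + successor(500) + " in " + hops + " hops");
    }
}
```

Each hop roughly halves the remaining ring distance to the key, which is where the logarithmic message bound cited in the abstract comes from.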
Most reports of the decrease in age-adjusted coronary heart disease (CHD) are based on databases with upper age cut-offs that exclude approximately half of the events. We report changes in rates of acute myocardial infarction (AMI) and of out-of-hospital coronary death between 1986 and 1996 among New Jersey residents >15 years old. Data on patients discharged with the diagnosis of AMI from nonfederal acute care hospitals in the state (n = 270,091) and all records in the New Jersey death registration files with CHD listed as the cause of death (n = 172,175) from 1986 to 1996 (total study n = 442,266) were analyzed. The rate of hospitalized AMI cases in the state remained essentially unchanged during these 11 years, whereas in-hospital and 30-day case fatality declined among all age groups and both sexes. Age-adjusted CHD rates showed a decrease in fatal events, a smaller decrease in total events, and a slight increase in nonfatal events. The proportion of fatal CHD events occurring out-of-hospital decreased, especially among men. The median age at occurrence of events increased by 1 year. Despite a decrease in CHD mortality, the rate of nonfatal events increased, especially among persons >75 years old. Thus, the decrease in age-adjusted CHD mortality is not all due to treatment and true prevention of CHD; the disease simply occurs at an older age. ©2001 by Excerpta Medica, Inc.
Limited data are available on the effect of anemia on mortality in patients with acute myocardial infarction (MI). We examined the association of anemia with mortality at 1 year among 30,341 patients hospitalized with acute MI in 1986 (prethrombolytic era, n = 15,584) and 1996 (thrombolytic era, n = 14,757). The records were obtained from the Myocardial Infarction Data Acquisition System, a database of all patients with MI admitted to nonfederal hospitals in New Jersey. Anemia was present in 996 patients (6.4%) in 1986 and 1510 patients (10.2%, P < .0001) in 1996. In both years, patients with anemia were older, more frequently female and nonwhite, and more likely to have left ventricular dysfunction, non-Q MI, and coronary artery bypass graft. In addition, in 1996, patients with anemia were more likely to undergo percutaneous transluminal coronary angioplasty and less likely to have a history of MI. One-year mortality was lower overall in 1996 than in 1986 (23.6%, 95% CI 22.9-24.3, vs 24.9%, 95% CI 24.2-25.6; P = .0001). In both years, patients with anemia had a significantly higher unadjusted risk of 1-year mortality (RR = 1.40, P = .0001 in both years). However, after controlling for demographics, left ventricular dysfunction, arrhythmias, Q versus non-Q MI, comorbid conditions, and revascularization procedures in a multivariable regression model, 1-year mortality in the anemia group was similar to that in the nonanemia group in both years. In the Myocardial Infarction Data Acquisition System database, anemia appears to have no significant direct effect on 1-year mortality. The higher unadjusted mortality observed among patients with acute MI and anemia is probably the result of older age, higher comorbidity, and more left ventricular dysfunction.
Interest in Grid computing has grown significantly over the past five years. Management of distributed cluster resources is a key issue in Grid computing. Central to management of resources is the effectiveness of resource allocation, as it determines the overall utility of the system. In this paper, we propose a new Grid system that consists of Grid Federation Agents which couple together distributed cluster resources to enable a cooperative environment.
Cloud computing is a recent advancement wherein IT infrastructure and applications are provided as 'services' to end-users under a usage-based payment model. It can leverage virtualized services even on the fly, based on requirements (workload patterns and QoS) that vary with time. The application services hosted under the Cloud computing model have complex provisioning, composition, configuration, and deployment requirements. Evaluating the performance of Cloud provisioning policies, application workload models, and resource performance models in a repeatable manner under varying system and user configurations and requirements is difficult to achieve. To overcome this challenge, we propose CloudSim: an extensible simulation toolkit that enables modeling and simulation of Cloud computing systems and application provisioning environments. The CloudSim toolkit supports both system and behavior modeling of Cloud system components such as data centers, virtual machines (VMs), and resource provisioning policies. It implements generic application provisioning techniques that can be extended with ease and limited effort. Currently, it supports modeling and simulation of Cloud computing environments consisting of both single and inter-networked clouds (federations of clouds). Moreover, it exposes custom interfaces for implementing policies and provisioning techniques for allocation of VMs under inter-networked Cloud computing scenarios. Several researchers from organizations such as HP Labs in the U.S.A. are using CloudSim in their investigations of Cloud resource provisioning and energy-efficient management of data center resources. The usefulness of CloudSim is demonstrated by a case study involving dynamic provisioning of application services in a hybrid federated Cloud environment. The results of this case study show that the federated Cloud computing model significantly improves how well application QoS requirements are met under fluctuating resource and service demand patterns.
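The custom allocation interfaces mentioned above lend themselves to a simple thought experiment. Below is a hypothetical, self-contained sketch (plain Java; the names are invented and are not CloudSim's API) of a federation-aware placement rule: serve a VM request locally when capacity allows, otherwise lease from the partner cloud with the most headroom.

```java
import java.util.*;

// Hypothetical sketch of federation-aware VM placement (invented
// names, not CloudSim's API): try the local cloud first, then fall
// back to the federated partner with the most free capacity.
class FederationSketch {
    record CloudSite(String name, int freeCores) {}

    static String placeVm(int cores, CloudSite local, List<CloudSite> partners) {
        if (local.freeCores() >= cores) return local.name();      // local first
        return partners.stream()                                  // else federate
                .filter(p -> p.freeCores() >= cores)
                .max(Comparator.comparingInt(CloudSite::freeCores))
                .map(CloudSite::name)
                .orElse("REJECTED");                              // no capacity anywhere
    }

    public static void main(String[] args) {
        CloudSite local = new CloudSite("private-dc", 2);
        List<CloudSite> partners = List.of(new CloudSite("partner-eu", 16),
                                           new CloudSite("partner-us", 4));
        System.out.println(placeVm(8, local, partners)); // -> partner-eu
    }
}
```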
Cloud computing aims to power the next-generation data centers and enables application service providers to lease data center capabilities for deploying applications, depending on user QoS (Quality of Service) requirements. Cloud applications have different composition, configuration, and deployment requirements. Quantifying the performance of resource allocation policies and application scheduling algorithms at finer detail in Cloud computing environments, for different application and service models under varying load, energy performance (power consumption, heat dissipation), and system size, is a challenging problem to tackle. To simplify this process, in this paper we propose CloudSim: an extensible simulation toolkit that enables modelling and simulation of Cloud computing environments. The CloudSim toolkit supports the modelling and creation of one or more virtual machines (VMs) on a simulated Data Center node, the creation of jobs, and the mapping of jobs to suitable VMs. It also allows simulation of multiple Data Centers to enable studies of federation and associated policies for migration of VMs for reliability and automatic scaling of applications.
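To make that workflow concrete, here is a minimal end-to-end scenario in the style of the widely circulated CloudSim examples: one data center with a single host, one broker, one VM, and one job (cloudlet) mapped to that VM. It assumes the CloudSim 3.x API; constructor signatures vary across releases, so treat it as a sketch rather than reference code.

```java
import java.util.*;
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.*;

// Minimal CloudSim scenario, assuming the CloudSim 3.x API:
// one datacenter, one host, one VM, one cloudlet.
public class MinimalCloudSimScenario {
    public static void main(String[] args) throws Exception {
        CloudSim.init(1, Calendar.getInstance(), false);  // 1 broker/user

        // One host: a single 1000-MIPS core, 2 GB RAM, time-shared VMs.
        List<Pe> pes = new ArrayList<>();
        pes.add(new Pe(0, new PeProvisionerSimple(1000)));
        Host host = new Host(0, new RamProvisionerSimple(2048),
                new BwProvisionerSimple(10000), 1_000_000, pes,
                new VmSchedulerTimeShared(pes));

        DatacenterCharacteristics ch = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", Collections.singletonList(host),
                10.0, 3.0, 0.05, 0.001, 0.0);  // time zone + resource costs
        new Datacenter("DC_0", ch,
                new VmAllocationPolicySimple(Collections.singletonList(host)),
                new LinkedList<Storage>(), 0);

        DatacenterBroker broker = new DatacenterBroker("Broker_0");

        // One VM and one cloudlet (job); the broker maps the job to the VM.
        Vm vm = new Vm(0, broker.getId(), 500, 1, 512, 1000, 10_000,
                "Xen", new CloudletSchedulerTimeShared());
        UtilizationModel full = new UtilizationModelFull();
        Cloudlet job = new Cloudlet(0, 40_000, 1, 300, 300, full, full, full);
        job.setUserId(broker.getId());

        broker.submitVmList(Collections.singletonList(vm));
        broker.submitCloudletList(Collections.singletonList(job));

        CloudSim.startSimulation();
        CloudSim.stopSimulation();
        for (Cloudlet c : broker.getCloudletReceivedList())
            System.out.printf("cloudlet %d finished at %.2f%n",
                    c.getCloudletId(), c.getFinishTime());
    }
}
```

When no explicit VM is bound to a cloudlet, the broker assigns jobs to created VMs itself, which is the "mapping to suitable VMs" behavior the abstract describes.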
This chapter describes Aneka-Federation, a decentralized and distributed system that combines enterprise Clouds, overlay networking, and structured peer-to-peer techniques to create scalable wide-area networking of compute nodes for high-throughput computing. Aneka-Federation integrates numerous small-scale Aneka Enterprise Cloud services and nodes that are distributed over multiple control and enterprise domains as parts of a single coordinated resource-leasing abstraction. The system is designed with the aim of making distributed enterprise Cloud resource integration and application programming flexible, efficient, and scalable. The system is engineered such that it: enables seamless integration of existing Aneka Enterprise Clouds as part of a single wide-area resource-leasing federation; self-organizes the system components based on a structured peer-to-peer routing methodology; and presents end-users with a distributed application composition environment that can support a variety of programming and execution models. This chapter describes the design and implementation of a novel, extensible, and decentralized peer-to-peer technique that helps to discover, connect, and provision the services of Aneka Enterprise Clouds among users, who can use different programming models to compose their applications. Evaluations of the system with applications programmed using the Task and Thread execution models on top of an overlay of Aneka Enterprise Clouds are also described.
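The coexistence of Task and Thread execution models over one federation can be pictured with a short hypothetical sketch (plain Java; Aneka itself is a .NET system, and these interface names are invented): both models reduce to work items submitted to the same federated execution service.

```java
import java.util.concurrent.*;

// Hypothetical sketch (invented names; Aneka itself is .NET-based) of
// one federation serving two programming models: a fire-and-forget
// Task model and a result-bearing Thread model.
class AnekaFederationSketch {
    interface TaskModel { void execute(); }             // bag-of-tasks style
    interface ThreadModel<T> { T run(); }               // remote-thread style

    static final ExecutorService federation =
            Executors.newFixedThreadPool(4);            // stands in for overlay nodes

    static void submitTask(TaskModel t) {
        federation.submit(t::execute);                  // no result expected
    }
    static <T> Future<T> submitThread(ThreadModel<T> t) {
        return federation.submit((Callable<T>) t::run); // result shipped back
    }

    public static void main(String[] args) throws Exception {
        submitTask(() -> System.out.println("task ran somewhere in the federation"));
        Future<Integer> f = submitThread(() -> 6 * 7);
        System.out.println("thread result: " + f.get());
        federation.shutdown();
    }
}
```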
Research interest in Grid computing has grown significantly over the past five years. Management of distributed resources is one of the key issues in Grid computing. Central to the management of resources is the effectiveness of resource allocation, as it determines the overall utility of the system. The current approaches to brokering in a Grid environment are non-coordinated, since application-level schedulers or brokers make scheduling decisions independently of the others in the system. Clearly, this can exacerbate the load-sharing and utilization problems of distributed resources due to the sub-optimal schedules that are likely to occur. To overcome these limitations, we propose a mechanism for coordinated sharing of distributed clusters based on computational economy. The resulting environment, called Grid-Federation, allows the transparent use of resources from the federation when local resources are insufficient to meet its users' requirements. The use of a computational economy methodology in coordinating resource allocation not only facilitates Quality of Service (QoS)-based scheduling, but also enhances the utility delivered by resources. We show by simulation that, while some users local to popular resources can experience higher costs and/or longer delays, the overall users' QoS demands across the federation are better met. Also, the federation's average-case message-passing complexity is seen to be scalable, though some jobs in the system may generate large numbers of messages before being scheduled. This is an extended version of a paper published at Cluster'05.
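The economy-based selection rule can be illustrated with a minimal sketch (hypothetical names, not the Grid-Federation implementation): each cluster quotes a price and an estimated completion time, and the broker picks the cheapest quote that satisfies the job's deadline and budget QoS constraints.

```java
import java.util.*;

// Illustrative sketch of economy-based scheduling (invented names):
// pick the cheapest cluster quote that meets deadline and budget.
class GridFederationSketch {
    record Quote(String cluster, double costPerHour, double hoursToFinish) {}

    static Optional<Quote> schedule(double budget, double deadlineHours,
                                    List<Quote> quotes) {
        return quotes.stream()
                .filter(q -> q.hoursToFinish() <= deadlineHours)            // QoS: deadline
                .filter(q -> q.costPerHour() * q.hoursToFinish() <= budget) // QoS: budget
                .min(Comparator.comparingDouble(
                        q -> q.costPerHour() * q.hoursToFinish()));
    }

    public static void main(String[] args) {
        List<Quote> quotes = List.of(
                new Quote("cluster-A", 4.0, 10),   // popular, pricey
                new Quote("cluster-B", 1.5, 18),   // cheap but too slow
                new Quote("cluster-C", 2.0, 12));
        schedule(40.0, 15.0, quotes).ifPresentOrElse(
                q -> System.out.println("run on " + q.cluster()),  // -> cluster-C
                () -> System.out.println("no feasible cluster"));
    }
}
```

This also shows the trade-off noted in the abstract: the popular (expensive) cluster loses work it would have received under uncoordinated brokering.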
Computational grids that couple geographically distributed resources are becoming the de-facto computing platform for solving large-scale problems in science, engineering, and commerce. Software to enable grid computing has been primarily written for Unix-class operating systems, thus severely limiting the ability to effectively utilize the computing resources of the vast majority of Windows-based desktop computers. Addressing Windows-based enterprise grid computing is particularly important from the software industry's viewpoint, where interest in grids is emerging rapidly. Microsoft's .NET Framework has become near-ubiquitous for implementing commercial distributed systems for Windows-based platforms, positioning it as the ideal platform for developing peer-to-peer or enterprise grid computing environments. This chapter introduces the design requirements of enterprise grid systems and discusses various middleware technologies that meet them. It presents a .NET-based Grid framework called Alchemi, developed as part of the Gridbus project. Alchemi provides the runtime machinery and programming environment required to construct enterprise grids and develop grid applications. It allows flexible application composition by supporting an object-oriented application programming model in addition to a file-based job model. Cross-platform support is provided via a web services interface, and a flexible execution model supports dedicated and non-dedicated execution by grid nodes.
Computational grids that couple geographically distributed resources are becoming the de-facto computing platform for solving large-scale problems in science, engineering, and commerce. Software to enable grid computing has been primarily written for Unix-class operating systems, thus severely limiting the ability to effectively utilize the computing resources of the vast majority of Windows-based desktop computers. Addressing Windows-based grid computing is particularly important from the software industry's viewpoint, where interest in grids is emerging rapidly. Microsoft's .NET Framework has become near-ubiquitous for implementing commercial distributed systems for Windows-based platforms, positioning it as the ideal platform for grid computing in this context. In this paper we present Alchemi, a .NET-based framework that provides the runtime machinery and programming environment required to construct enterprise/desktop grids and develop grid applications. It allows flexible application composition by supporting an object-oriented application programming model in addition to a file-based job model. Cross-platform support is provided via a web services interface, and a flexible execution model supports dedicated and non-dedicated (voluntary) execution by grid nodes.
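The object-oriented programming model referred to above centres on subclassing a "grid thread" whose body executes on a remote node (in Alchemi this is the .NET GThread class). The sketch below is a hypothetical Java analogue of that idea, with a local thread pool standing in for the grid, to show the shape of the model rather than Alchemi's actual API.

```java
import java.util.concurrent.*;

// Hypothetical Java analogue of Alchemi's grid-thread model: the user
// subclasses a "grid thread", the framework ships it to a node, runs
// it, and returns the finished object. A local pool mocks the grid.
abstract class GridThreadSketch implements Callable<Double> {
    // Subclasses put their remote "Start()" body in call().
}

class PiSliceThread extends GridThreadSketch {
    private final int from, to;
    PiSliceThread(int from, int to) { this.from = from; this.to = to; }

    @Override public Double call() {             // runs on a "grid node"
        double sum = 0;
        for (int k = from; k < to; k++)
            sum += Math.pow(-1, k) / (2.0 * k + 1);
        return 4 * sum;                           // partial Leibniz sum for pi
    }

    public static void main(String[] args) throws Exception {
        ExecutorService grid = Executors.newFixedThreadPool(4);  // mock nodes
        Future<Double> a = grid.submit(new PiSliceThread(0, 500_000));
        Future<Double> b = grid.submit(new PiSliceThread(500_000, 1_000_000));
        System.out.println("pi ~= " + (a.get() + b.get()));
        grid.shutdown();
    }
}
```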
An efficient resource discovery mechanism is one of the fundamental requirements of Grid computing systems, as it aids resource management and the scheduling of applications. Resource discovery involves searching for the resource types that match the user's application requirements. Various kinds of solutions to Grid resource discovery have been suggested, including the centralised and hierarchical information server approaches. However, both of these approaches have serious limitations with regard to scalability, fault tolerance, and network congestion. To overcome these limitations, indexing resource information using a decentralised network model, such as Peer-to-Peer (P2P), has been actively proposed in the past few years.
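A toy sketch of the decentralised indexing idea (self-contained Java; all names hypothetical): resource descriptions are published under a key derived from their attributes, and that key determines which index node stores them, so a query routes to one responsible node instead of a central information server.

```java
import java.util.*;

// Toy sketch of decentralised resource indexing (invented names):
// the attribute string hashes to a "home" index node, which holds
// all resources advertising those attributes.
class DiscoverySketch {
    static final int NODES = 8;
    // One bucket per simulated index node.
    static final List<Map<String, List<String>>> index = new ArrayList<>();
    static { for (int i = 0; i < NODES; i++) index.add(new HashMap<>()); }

    static int home(String key) {                 // which node stores this key
        return Math.floorMod(key.hashCode(), NODES);
    }
    static void publish(String attrs, String resource) {
        index.get(home(attrs))
             .computeIfAbsent(attrs, k -> new ArrayList<>())
             .add(resource);
    }
    static List<String> discover(String attrs) {
        return index.get(home(attrs)).getOrDefault(attrs, List.of());
    }

    public static void main(String[] args) {
        publish("linux/x86/4cores", "clusterA.example.org");
        publish("linux/x86/4cores", "clusterB.example.org");
        publish("windows/x64/8cores", "desktop17.example.org");
        System.out.println(discover("linux/x86/4cores")); // both clusters
    }
}
```

No node holds the whole index and no node is a single point of failure, which is exactly the property the centralised and hierarchical approaches lack.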
The Service Level Agreement (SLA)-based grid superscheduling approach promotes coordinated resource sharing. Superscheduling is facilitated between administratively and topologically distributed grid sites via grid schedulers such as resource brokers and workflow engines. In this work, we present a market-based SLA coordination mechanism based on the well-known contract net protocol.
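The contract net interaction pattern underlying this mechanism is easy to sketch (hypothetical Java types, not the paper's implementation): a manager announces a task, contractor sites respond with bids, and the task is awarded to the best bid.

```java
import java.util.*;

// Minimal contract-net sketch (invented types): announce, bid, award.
class ContractNetSketch {
    record Bid(String site, double finishTime) {}

    interface Contractor {
        Optional<Bid> bid(String task);           // a site may decline to bid
    }

    static Optional<Bid> announce(String task, List<Contractor> sites) {
        return sites.stream()
                .map(s -> s.bid(task))
                .flatMap(Optional::stream)
                .min(Comparator.comparingDouble(Bid::finishTime)); // award best
    }

    public static void main(String[] args) {
        List<Contractor> sites = List.of(
                t -> Optional.of(new Bid("site-A", 120.0)),
                t -> Optional.of(new Bid("site-B", 90.0)),
                t -> Optional.empty());                 // site-C declines
        announce("render-job-42", sites)
                .ifPresent(b -> System.out.println("awarded to " + b.site()));
    }
}
```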
Cloud computing focuses on the delivery of reliable, secure, fault-tolerant, sustainable, and scalable infrastructures for hosting Internet-based application services. These applications have different composition, configuration, and deployment requirements. Quantifying the performance of scheduling and allocation policies on a Cloud infrastructure (hardware, software, services), for different application and service models under varying load, energy performance (power consumption, heat dissipation), and system size, is an extremely challenging problem to tackle. To simplify this process, in this paper we propose CloudSim: a new, generalized, and extensible simulation framework that enables seamless modelling, simulation, and experimentation of emerging Cloud computing infrastructures and management services. The simulation framework has the following novel features: (i) support for modelling and instantiation of large-scale Cloud computing infrastructure, including data centers, on a single physical computing node and Java virtual machine; (ii) a self-contained platform for modelling data centers, service brokers, and scheduling and allocation policies; (iii) availability of a virtualization engine, which aids in the creation and management of multiple, independent, and co-hosted virtualized services on a data center node; and (iv) flexibility to switch between space-shared and time-shared allocation of processing cores to virtualized services.
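Feature (iv) can be illustrated with a back-of-the-envelope model (hypothetical code, not the framework's scheduler classes): with two cores and four equal-length services, space-sharing completes two services at a time, while time-sharing slows all four down equally.

```java
// Simplified model of the two core-allocation modes (not the
// framework's classes): finish times for n equal services of
// length 'len' on 'cores' cores.
class AllocationModes {
    static double[] spaceShared(int n, int cores, double len) {
        double[] t = new double[n];
        for (int i = 0; i < n; i++)
            t[i] = len * (1 + i / cores);   // services run in waves of 'cores'
        return t;
    }
    static double[] timeShared(int n, int cores, double len) {
        double[] t = new double[n];
        double slowdown = Math.max(1.0, (double) n / cores);
        for (int i = 0; i < n; i++)
            t[i] = len * slowdown;          // all services share the cores
        return t;
    }
    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(spaceShared(4, 2, 10)));
        // [10.0, 10.0, 20.0, 20.0]
        System.out.println(java.util.Arrays.toString(timeShared(4, 2, 10)));
        // [20.0, 20.0, 20.0, 20.0]
    }
}
```

Both modes keep the cores fully busy; the choice changes whether some services finish early or all finish together, which is why the framework exposes it as a switchable policy.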
Computational grids that couple geographically distributed resources are becoming the de-facto computing platform for solving large-scale problems in science, engineering, and commerce. Software to enable grid computing has been primarily written for Unix-class operating systems, thus severely limiting the ability to effectively utilize the computing resources of the vast majority of desktop computers, i.e., those running variants of the Microsoft Windows operating system. Addressing Windows-based grid computing is particularly important from the software industry's viewpoint, where interest in grids is emerging rapidly. Microsoft's .NET Framework has become near-ubiquitous for implementing commercial distributed systems for Windows-based platforms, positioning it as the ideal platform for grid computing in this context. In this paper we present Alchemi, a .NET-based grid computing framework that provides the runtime machinery and programming environment required to construct desktop grids and develop grid applications. It allows flexible application composition by supporting an object-oriented grid application programming model in addition to a grid job model. Cross-platform support is provided via a web services interface, and a flexible execution model supports dedicated and non-dedicated (voluntary) execution by grid nodes.
Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services so as to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, hence load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in terms of response time and cost savings under dynamic workload scenarios.
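A hypothetical sketch of the load-coordination idea (invented names, not the InterCloud components): steer each incoming request to the federated data center with the lowest current utilisation, so overloaded sites shed load to their siblings automatically.

```java
import java.util.*;

// Hypothetical sketch of federated load coordination (invented
// names): route each request to the least-utilised data center.
class InterCloudSketch {
    record DataCenter(String region, double utilisation) {}

    static String routeRequest(List<DataCenter> federation) {
        return federation.stream()
                .min(Comparator.comparingDouble(DataCenter::utilisation))
                .map(DataCenter::region)
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<DataCenter> federation = List.of(
                new DataCenter("us-east", 0.92),   // overloaded
                new DataCenter("eu-west", 0.40),
                new DataCenter("ap-south", 0.55));
        System.out.println(routeRequest(federation)); // -> eu-west
    }
}
```

A production policy would also weigh network latency to the user and migration cost, but the shedding behavior is the core of the coordination the abstract argues for.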
In the current approaches to workflow scheduling, there is no cooperation between the distributed workflow brokers and, as a result, the problem of conflicting schedules arises. To overcome this problem, in this paper we propose a decentralized and cooperative workflow scheduling algorithm. The proposed approach utilizes a Peer-to-Peer (P2P) coordination space to coordinate application schedules among Grid-wide distributed workflow brokers. The proposed algorithm is completely decentralized in the sense that there is no central point of contact in the system; responsibility for key functionalities such as resource discovery and scheduling coordination is delegated to the P2P coordination space. With our approach, not only are performance bottlenecks likely to be eliminated, but efficient scheduling with enhanced scalability and better autonomy for users is also likely to be achieved. We demonstrate the feasibility of our approach through an extensive trace-driven simulation study.
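The conflict-avoidance role of the coordination space can be sketched in a few lines (hypothetical API; the paper's space is a distributed P2P service, mocked here with one atomic map): a broker commits a schedule only after atomically claiming a ticket for the resource slot, so two brokers can never both schedule onto the same slot.

```java
import java.util.concurrent.*;

// Sketch of coordination-space claiming (invented API): a broker may
// only schedule onto a slot after atomically winning the claim.
class CoordinationSpaceSketch {
    // resource slot -> claiming broker; the space arbitrates claims.
    static final ConcurrentMap<String, String> claims = new ConcurrentHashMap<>();

    static boolean claim(String slot, String broker) {
        return claims.putIfAbsent(slot, broker) == null;  // atomic: one winner
    }

    public static void main(String[] args) {
        String slot = "clusterA/cpu-slot-07";
        System.out.println("broker-1 wins? " + claim(slot, "broker-1")); // true
        System.out.println("broker-2 wins? " + claim(slot, "broker-2")); // false
        // The losing broker re-queries the space and claims another slot.
    }
}
```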