Cloud Computing - Unit-3
Public clouds are accessible over the Internet by any user who subscribes to the service, allowing wide accessibility and promoting standardization. They are managed by service providers. Private clouds, on the other hand, are built within an organization's intranet, offering limited access to its members and partners, allowing for higher efficiency and security but less standardization. They are managed by the client organization. Hybrid clouds combine elements of both public and private clouds, offering flexibility through resource sharing but requiring compromises in management. Hybrid clouds allow for external public cloud capacity to supplement private infrastructure.
Organizations may choose a private cloud over a public cloud due to its ability to offer higher customization and operational control, aligning more closely with specific security, privacy, and compliance requirements. Private clouds provide efficient and secure management of data and workloads within an organization. They allow for tailored solutions to meet unique business needs and maintain control over IT infrastructure, supporting internal innovation without reliance on external service providers.
A hybrid cloud model combines public cloud resources with private cloud infrastructure, offering improved flexibility and resource allocation. The primary benefit is the ability to scale resources dynamically and cost-effectively by supplementing local infrastructure with public cloud resources. However, this requires compromises in terms of resource sharing, integrating different cloud environments, and potentially posing challenges in security and data management due to the mixed use of public and private resources.
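To make the "supplementing local infrastructure" idea concrete, the following minimal sketch shows a simple cloud-bursting placement decision in Python. The capacity figure, the utilization threshold, and the function name are illustrative assumptions, not any provider's API or a prescribed policy.

```python
# Minimal sketch of a cloud-bursting policy for a hybrid cloud.
# All names and numbers are illustrative assumptions, not a real provider API.

PRIVATE_CAPACITY_VMS = 100      # VMs the private cloud can host (assumed)
BURST_THRESHOLD = 0.85          # utilization above which surges go to the public cloud

def place_workload(requested_vms: int, private_in_use: int) -> dict:
    """Decide how many VMs run privately and how many burst to the public cloud."""
    free_private = PRIVATE_CAPACITY_VMS - private_in_use
    utilization = private_in_use / PRIVATE_CAPACITY_VMS

    if utilization < BURST_THRESHOLD and requested_vms <= free_private:
        return {"private": requested_vms, "public": 0}

    # Keep what fits locally, send the surge to the public cloud.
    local = max(0, min(requested_vms, free_private))
    return {"private": local, "public": requested_vms - local}

if __name__ == "__main__":
    print(place_workload(requested_vms=30, private_in_use=90))
    # -> {'private': 10, 'public': 20}: the surge is offloaded to the public cloud
```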
Interconnection networks are crucial for supporting application traffic, enabling effective data transfer between servers, and ensuring network expandability in data centers. Expandability is important to accommodate growing demands, allowing the integration of new nodes without significant reconfiguration. A robust network design ensures fault tolerance and graceful degradation, where the network can sustain functionality even when parts fail. Interconnection networks need to support increasing workloads and traffic seamlessly, maintaining performance as data centers scale.
The primary service models in cloud computing are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). IaaS provides foundational resources such as compute instances, file storage, and databases. PaaS offers a platform for deploying and running applications, handling services such as job launching, billing, and monitoring. SaaS operates at the application layer, providing end-user interfaces to software applications. Each service model builds on the one below it, forming a layered approach to delivering cloud services.
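The layering can be pictured with a short sketch in which each model wraps the one below it. All class names, method names, and the "crm-suite" application are hypothetical and chosen only to illustrate the stacking, not to model any real provider.

```python
# Illustrative layering of IaaS, PaaS, and SaaS (hypothetical names throughout).

class IaaS:
    """Foundational resources: compute instances and storage."""
    def provision_vm(self, cpus: int, memory_gb: int) -> str:
        vm_id = f"vm-{cpus}cpu-{memory_gb}gb"
        print(f"[IaaS] provisioned {vm_id}")
        return vm_id

class PaaS:
    """Platform services built on IaaS: job launching, billing, monitoring."""
    def __init__(self, iaas: IaaS):
        self.iaas = iaas
    def launch_app(self, name: str) -> str:
        vm = self.iaas.provision_vm(cpus=2, memory_gb=4)
        print(f"[PaaS] launched '{name}' on {vm}; billing and monitoring enabled")
        return name

class SaaS:
    """Application layer exposed to end users, built on PaaS."""
    def __init__(self, paas: PaaS):
        self.app = paas.launch_app("crm-suite")
    def serve_user(self, user: str) -> str:
        return f"[SaaS] serving '{self.app}' UI to {user}"

if __name__ == "__main__":
    print(SaaS(PaaS(IaaS())).serve_user("alice"))
```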
The fat-tree switch network design offers several benefits, including fault tolerance, as it provides multiple paths between server nodes, allowing alternate routing if links fail. The network is organized into two layers: the lower layer connects server nodes to edge switches, and the upper layer contains aggregation switches. This design ensures that individual switch failures do not disrupt the entire network's connectivity, affecting only a few server nodes. However, its complexity can increase costs and may require more sophisticated management, potentially complicating initial construction and ongoing maintenance.
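For concreteness, the sketch below computes the sizing of the widely used k-ary fat-tree construction built from k-port switches, which places a core layer above the edge and aggregation layers mentioned here. The formulas are the standard ones for that construction and are shown purely as an illustration.

```python
# Sizing of a k-ary fat-tree built from k-port switches (standard construction).

def fat_tree_sizing(k: int) -> dict:
    """Return switch and host counts for a fat-tree built from k-port switches."""
    assert k % 2 == 0, "k must be even"
    edge_per_pod = k // 2
    agg_per_pod = k // 2
    return {
        "pods": k,
        "edge_switches": k * edge_per_pod,          # k/2 per pod
        "aggregation_switches": k * agg_per_pod,    # k/2 per pod
        "core_switches": (k // 2) ** 2,
        "hosts": (k ** 3) // 4,                     # k/2 hosts per edge switch
    }

if __name__ == "__main__":
    # With 48-port switches, a fat-tree can connect 27,648 servers.
    print(fat_tree_sizing(48))
```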
The fat-tree topology contributes to fault tolerance in data-center networks by offering multiple redundant paths between any two server nodes. This redundancy allows data to be routed through alternative paths if certain links fail, minimizing disruptions. The topology is organized into layers, with core switches providing multiple paths among different pods, ensuring continuity of connectivity despite failures. Moreover, isolated failures, such as those of edge switches, only affect a limited number of nodes, preserving the overall network performance and reliability.
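The effect of redundant paths can be demonstrated on a toy topology: the sketch below connects two edge switches through two aggregation switches, removes one link, and checks that an alternate path still exists. The node names and the four-link topology are illustrative, not a full fat-tree.

```python
# Toy demonstration of path redundancy: two aggregation switches give two
# disjoint paths between the edge switches; losing one link keeps connectivity.
from collections import deque

def has_path(links: set, src: str, dst: str) -> bool:
    """Breadth-first search over an undirected link set."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

links = {
    ("edge1", "agg1"), ("edge1", "agg2"),   # edge1 uplinks to both aggregation switches
    ("edge2", "agg1"), ("edge2", "agg2"),   # edge2 uplinks to both aggregation switches
}

print(has_path(links, "edge1", "edge2"))                          # True
print(has_path(links - {("edge1", "agg1")}, "edge1", "edge2"))    # True: rerouted via agg2
```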
Traditional system-centric resource management strategies are inadequate for cloud environments due to their static resource allocation. Instead, a market-oriented resource management approach is favored, as it dynamically regulates supply and demand to achieve equilibrium. This approach uses economic incentives to balance resources: users/brokers submit service requests, and an SLA resource allocator manages service delivery based on market-driven interactions. This setup supports the specific QoS parameters outlined in SLAs, ensuring resources are efficiently deployed according to user needs.
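One way to make the supply-and-demand idea concrete is a utilization-based price that an allocator uses to admit or reject broker requests. The pricing rule, the capacity and base-price constants, and the function names below are illustrative assumptions rather than a prescribed mechanism.

```python
# Toy market-oriented admission control: the price rises with utilization, and a
# request is admitted only if its bid covers the current price. Illustrative only.

BASE_PRICE = 0.05   # cost per VM-hour when the data center is empty (assumed)
CAPACITY = 1000     # total VMs available (assumed)

def current_price(vms_in_use: int) -> float:
    """Price grows with utilization, throttling demand as supply tightens."""
    utilization = vms_in_use / CAPACITY
    return BASE_PRICE * (1 + 4 * utilization)   # up to 5x the base price when full

def admit(request_vms: int, bid_per_vm_hour: float, vms_in_use: int) -> bool:
    """Allocator's admission test: enough capacity and an adequate bid."""
    if vms_in_use + request_vms > CAPACITY:
        return False
    return bid_per_vm_hour >= current_price(vms_in_use)

if __name__ == "__main__":
    print(admit(request_vms=50, bid_per_vm_hour=0.10, vms_in_use=200))  # True
    print(admit(request_vms=50, bid_per_vm_hour=0.10, vms_in_use=900))  # False: price too high
```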
Virtualization is integral to cloud computing, enabling the creation of virtual machines (VMs) that abstract physical resources for service provisioning. It allows for the dynamic allocation of computing resources, optimizing data-center usage by creating virtual clusters that effectively manage workloads. Virtualization supports scalable and flexible infrastructure deployment, facilitating the delivery of Infrastructure as a Service (IaaS) by provisioning compute instances and storage dynamically based on demand.
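A minimal sketch of demand-driven sizing for a virtual cluster follows. The per-VM capacity, the target utilization, and the function names are assumptions chosen for illustration, not the API of any hypervisor or cloud platform.

```python
# Toy demand-driven sizing of a virtual cluster (illustrative, not a real hypervisor API).
import math

VM_CAPACITY_RPS = 500        # requests/second one VM can absorb (assumed)
TARGET_UTILIZATION = 0.7     # keep VMs around 70% busy (assumed)

def desired_cluster_size(incoming_rps: float, min_vms: int = 2) -> int:
    """Number of VMs needed so average per-VM load stays near the target utilization."""
    needed = incoming_rps / (VM_CAPACITY_RPS * TARGET_UTILIZATION)
    return max(min_vms, math.ceil(needed))

def resize(current_vms: int, incoming_rps: float) -> int:
    """Grow or shrink the virtual cluster toward the desired size."""
    target = desired_cluster_size(incoming_rps)
    if target > current_vms:
        print(f"provisioning {target - current_vms} VM(s)")
    elif target < current_vms:
        print(f"releasing {current_vms - target} VM(s)")
    return target

if __name__ == "__main__":
    size = resize(current_vms=4, incoming_rps=3500)   # provisions 6 VMs, returns 10
```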
In public clouds, workload management focuses on handling dynamic workloads without communication dependency, often offloading surge workloads. This enables scalability by avoiding capital expenses for users. Private clouds, meanwhile, allow for workload balancing within the same intranet, leveraging existing IT resources more efficiently. They provide control over resource deployment to meet specific organizational needs, including security and privacy policies, but may not offer the same cost efficiency for scalability as public clouds do.