Windows Clustering Technologies: An Overview
Published: November 2001
Abstract
This article is written for IT managers and examines the cluster technologies available on the Microsoft Windows server operating system. Also discussed is how cluster technologies can be architected to create comprehensive, mission-critical solutions that meet the requirements of the enterprise.
The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication. This document is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS DOCUMENT. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation. Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property. © 2001 Microsoft Corporation. All rights reserved. Microsoft, Windows, and Windows NT are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Other product and company names mentioned herein may be the trademarks of their respective owners. Microsoft Corporation, One Microsoft Way, Redmond, WA 98052-6399, USA.
Contents
Acknowledgements
Introduction
Cluster Architecture Essentials
Server Cluster Architecture
Network Load Balancing Architecture
Component Load Balancing Architecture
Summary
Related Links
Acknowledgements
Manoj Nayar, Product Manager, Microsoft Corporation
Greg Rankich, Product Manager, Microsoft Corporation
Michael Kessler, Technical Editor, Microsoft Corporation
Introduction
Microsoft has listened to customers and worked steadily to improve the underlying technology architecture of the Windows operating system.
Clustering Technologies: Purposes and Requirements
Each technology has a specific purpose and is designed to meet different requirements. Network Load Balancing is designed to address bottlenecks caused by front-end Web services. Component Load Balancing is designed to address the unique scalability and availability needs of middle-tier applications.
Server Cluster is designed to maintain data integrity and provide failover support.
Organizations can use Microsoft cluster technologies to increase overall availability, while minimizing single points of failure and reducing costs by using industry-standard hardware and software.

E-Commerce Scenario
The clustering technologies outlined above can be (and typically are) combined to architect a comprehensive service offering. The most common scenario in which all three solutions are combined is an e-commerce site where front-end Web servers use NLB, middle-tier application servers use CLB, and back-end database servers use Server Cluster.

These technologies alone are not enough to ensure the highest levels of availability. To ensure the highest availability for line-of-business applications and mission-critical services, organizations must take an end-to-end service approach to operations and availability. This means treating the entire service offering as a whole and designing, implementing, and operating the service solution following industry-standard best practices, such as those used by Microsoft Enterprise Services.
Topics Covered
The main topics covered in the rest of this article are:
Cluster Architecture Essentials
Server Cluster Architecture
Network Load Balancing Architecture
Component Load Balancing Architecture
Limitations
While a well-designed solution can guard against application failure, system failure, and site failure, cluster technologies do have limitations. Cluster technologies depend on compatible applications and services to operate properly. The software must respond appropriately when failure occurs. Cluster technology cannot protect against failures caused by viruses, software corruption, or human error. To protect against these types of problems, organizations need solid data protection and recovery plans.
Cluster Organization
Clusters are organized in loosely coupled groups that are often referred to as farms or packs. In most cases, as shown in Figure 1 below, front-end and middle-tier services are organized as farms using clones, while back-end and critical support services, such as component routing, are organized as packs.

IT Staff Considerations
As IT staff architect clustered solutions, they need to look carefully at the cluster organization they plan to use. The goal should be to organize servers according to the way the servers will be used and the applications they will be running. Typically, Web servers, application servers, and database servers are all organized differently.
Figure 1. Clusters are organized as farms or packs.

Cluster Farm
A farm is a group of servers that run similar services but don't typically share data. They are called a farm because they handle whatever requests are passed out to them using identical copies of data stored locally. Because they use identical copies of data rather than sharing data, members of a farm operate autonomously and are also referred to as clones. Front-end Web servers running Internet Information Services (IIS) and using NLB are an example of a farm: identical data is replicated to every server in the farm, and each server can handle any request that comes to it using its local copy of the data.
Example: Load-Balanced Web Farm
In a load-balanced Web farm with ten servers, you could have:
Clone 1: Web server using local data
Clone 2: Web server using local data
Clone 3: Web server using local data
Clone 4: Web server using local data
Clone 5: Web server using local data
Clone 6: Web server using local data
Clone 7: Web server using local data
Clone 8: Web server using local data
Clone 9: Web server using local data
Clone 10: Web server using local data
Cluster *ack A pac* is a group of servers that operate together and share partitioned data! The are called a pac* because the wor* together to manage and maintain services! 4ecause members of a pac* share access to partitioned data" the have uni$ue operations modes and usuall access the shared data on dis* drives to which all members of the pac* are connected! ,xampleA @#node 1F3 1erver Cluster 'ac* An example of a pac* is a database 1erver Cluster running 1F3 1erver -... and a server cluster with partitioned database views! Members of the pac* share access to the data and have a uni$ue chun* of data or logic that the handle" rather than handling all data re$uests! In a @#node 1F3 1erver cluster2 7atabase 1erver 9 ma handle accounts that begin with A#?! 7atabase 1erver - ma handle accounts that begin with (#M! 7atabase 1erver > ma handle accounts that begin with ! 7atabase 1erver @ ma handle accounts that begin with T#G!
Combining Techniques: A Large-Scale E-Commerce Site
Servers in a tier can also be organized using a combination of the above techniques. An example of this combination is a large-scale e-commerce site that has middle-tier application servers running Application Center 2000 and CLB. To configure CLB, two clusters are recommended. The Component Routing Cluster handles the message routing between the front-end Web servers and the application servers.
The Application Server Cluster activates and runs the components installed on the application servers. While the component routing cluster could be configured on the Web tier without needing additional servers, a large e-commerce site may want the high-availability benefits of a separate cluster. In this case, the routing would take place on separate servers that are clustered using Server Cluster. The application servers would then be clustered using CLB.
Infrastructure Scaling
With proper architecture, the servers in a particular tier can be scaled out or up as necessary to meet growing performance and throughput needs. Figure 2 below provides an overview of the scalability of Windows clustering technologies.

IT Staff Considerations
As IT staff look at scalability requirements, they must always address the real business needs of the organization. The goal should be to select the right edition of the Windows operating system to meet the current and future needs of the project. The number of servers needed depends on the anticipated server load and the size and types of requests the servers will handle. Processors and memory should be sized appropriately for the applications and services the servers will be running, as well as for the number of simultaneous user connections.
Figure 2. Windows clustering technologies can be scaled to meet business requirements.

Scaling by Adding Servers
When looking to scale out by adding servers to the cluster, the clustering technology and the server operating system used are both important. As Table 1 below shows, the key difference in the outward scaling capabilities of Advanced Server and Datacenter Server is the number of nodes that can be used with Server Cluster. Under Windows 2000, the maximum number of Server Cluster nodes is four. Under Windows .NET, the maximum number of Server Cluster nodes is eight.
Table 1. Cluster Nodes Supported by Operating System and Technology.
Operating System Edition           Network Load Balancing   Component Load Balancing   Server Cluster
Windows 2000 Advanced Server       32                       8                          2
Windows 2000 Datacenter Server     32                       8                          4
Windows .NET Advanced Server       32                       8                          4
Windows .NET Datacenter Server     32                       8                          8

Scaling by Adding CPUs and RAM
When looking to scale up by adding CPUs and RAM, the edition of the server operating system used is extremely important. In terms of both processor and memory capacity, Datacenter Server is much more expandable. Advanced Server supports up to eight processors and eight gigabytes (GB) of RAM. Datacenter Server supports up to 32 processors and 64 GB of RAM.
Thus, organizations typically scale up from Advanced Server to Datacenter Server as their needs change over time.
Servers that will handle failover must be sized to handle the workload of the failed server as well as their current workload (if any). Additionally, both average and peak workloads must be considered. Servers need additional capacity to handle peak loads.

Server Cluster Nodes
Server Cluster nodes can be either active or passive.
Active Node. When a node is active, it is actively handling requests.
Passive Node. When a node is passive, it is idle, on standby waiting for another node to fail.
Multi-node clusters can be configured using different combinations of active and passive nodes.

Architecting Multi-Node Clusters
When architecting multi-node clusters, the decision as to whether nodes are configured as active or passive is extremely important. To understand why, consider the following:
If an active node fails and a passive node is available, applications and services running on the failed node can be transferred to the passive node. Since the passive node has no current workload, the server should be able to assume the workload of the other server without any problems (provided all servers have the same hardware configuration).
If all servers in a cluster are active and a node fails, the applications and services running on the failed node can be transferred to another active node. Since that server is already active, it will have to handle the processing load of both systems. The server must be sized to handle multiple workloads or it may fail as well.
In a multi-node configuration where there is one passive node for each active node, the servers could be configured so that under average workload they use about 50% of CPU and memory resources. In the 4-node configuration depicted in Figure 3 below, where failover goes from one active node to a specific passive node, this could mean two active nodes (A1 and A2) and two passive nodes (P1 and P2), each with four processors and 4 GB of RAM. In this example, node A1 fails over to node P1, and node A2 fails over to node P2, with the extra capacity used to handle peak workloads.
In a multi-node configuration where there are more active nodes than passive nodes, the servers can be configured so that under average workload they use a proportional percentage of CPU and memory resources. In the 4-node configuration illustrated in Figure 3 above, where nodes A, B, C, and D are configured as active and failover could go between nodes A and B or between nodes C and D, this could mean configuring the servers so that they use about 25% of CPU and memory resources under average workload. In this example, node A could fail over to B (and vice versa), or node C could fail over to D (and vice versa). Because the servers in this example would need to handle two workloads in the case of a node failure, the CPU and memory configuration would be at least doubled, so instead of using four processors and 4 GB of RAM, the servers might use eight processors and 8 GB of RAM. (A short sizing sketch appears at the end of this section.)

Shared-Nothing Database Configuration
When Server Cluster has multiple active nodes, data must be shared between applications running on the clustered servers. In most cases, this is handled with a shared-nothing database configuration. In a shared-nothing database configuration, the application is partitioned to access private database sections. This means that a particular node is configured with a specific view into the database that allows it to handle specific types of requests, such as account names that start with the letters A-F, and that it is the only node that can update the related section of the database. (This eliminates the possibility of corruption from simultaneous writes by multiple nodes.)
Note: Both Microsoft Exchange 2000 and Microsoft SQL Server 2000 support multiple active nodes and shared-nothing database configurations.
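Returning to the active/passive sizing examples above, the underlying rule of thumb can be sketched in a few lines (Python, illustrative figures only): size each node so that, even after it absorbs a failed peer's workload, its average utilization stays near a target level, leaving the rest for peak loads.

# Sketch of the sizing rule implied above. The 0.5 post-failover target
# mirrors the article's examples; all figures are illustrative.
def average_utilization_per_workload(workloads_after_failover,
                                     post_failover_target=0.5):
    """How busy (on average) each workload should keep a node, if that node
    may have to carry 'workloads_after_failover' workloads at once."""
    return post_failover_target / workloads_after_failover

# Active/passive pairs (A1->P1, A2->P2): the standby inherits one workload,
# so each active node can average about 50% utilization.
print(average_utilization_per_workload(1))   # 0.5
# All-active pairs (A<->B, C<->D): a survivor carries two workloads,
# so each node should average about 25% utilization.
print(average_utilization_per_workload(2))   # 0.25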
Partial Implementation Design
With a partial implementation, only essential components are installed at remote sites to:
Handle overflow in peak periods
Maintain uptime on a limited basis in case the primary site fails
Provide limited services as needed
Replicating static content on Web sites and read-only data from databases. This partial implementation technique would allow remote sites to handle requests for static content and other types of data that are infrequently changed. Users could browse sites and access account information, product catalogs, and other services. If they needed to access dynamic content or modify information (add, change, delete), the sites' geographical load balancers could redirect users to the primary site.

Implement all layers of the infrastructure, but with fewer redundancies in the architecture, or implement only core components, relying on the primary site to provide the full array of features. With either of these partial implementation techniques, the design may need to incorporate near real-time replication and synchronization for databases and applications. This ensures a consistent state for data and application services.

Geographically Dispersed Clusters
A full or partial design could also use geographically dispersed clusters running Server Cluster. Geographically dispersed clusters use virtual LANs to connect storage area networks (SANs) over long distances. A VLAN connection with latency of 500 milliseconds or less ensures that cluster consistency can be maintained. Storage extensions and replication, if any, are handled by the hardware, and the clustering infrastructure is not aware of any such implementations. Site failures, which could include failure of primary storage, may require manual intervention to keep clustering functional. Geographically dispersed clusters are also referred to as stretched clusters and are available in Windows 2000 and Windows .NET Server.

Majority Node Clustering
Windows .NET Server offers many improvements in the area of geographically dispersed clusters, including a new type of quorum resource called a majority node set. Majority node clustering changes the way the cluster quorum resource is used. This allows cluster servers to be geographically separated while maintaining consistency in the event of node failure. With a standard cluster configuration, as illustrated in Figure 5 below, the quorum resource writes information on all cluster database changes to the recovery logs; this ensures that the cluster configuration and state data can be recovered. The quorum resource resides on the shared disk drives and can be used to verify whether other nodes in the cluster are functioning.
Figure 5. Comparing local and geographically dispersed clusters.

With a majority node cluster configuration in Windows .NET Server, the quorum resource is configured as a majority node set resource. This new type of quorum resource allows the quorum data, which includes cluster configuration changes and state information, to be stored on the system disk of each node in the cluster. Because the cluster configuration data is stored locally, the cluster can be maintained in a consistent state even though the cluster itself is geographically dispersed. In such a setup, there is no need for complex configurations to maintain quorum information on storage located on the storage interconnect. As the name implies, a majority of the nodes must be available for this cluster configuration to operate normally.
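The rule that gives majority node clustering its name can be sketched in a few lines (Python, conceptual only; the actual quorum arbitration is performed by the cluster service, not by application code):

# Sketch of the majority rule behind a majority node set quorum.
def has_quorum(configured_nodes, available_nodes):
    """The cluster keeps running only while a strict majority of its
    configured nodes remains available."""
    return len(available_nodes) > len(configured_nodes) // 2

nodes = ["NodeA", "NodeB", "NodeC", "NodeD", "NodeE"]
print(has_quorum(nodes, ["NodeA", "NodeB", "NodeC"]))   # True  (3 of 5)
print(has_quorum(nodes, ["NodeD", "NodeE"]))            # False (2 of 5)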
Server Cluster
Server Cluster is used to provide failover support for applications and services. A Server Cluster can consist of up to eight nodes. Each node is attached to one or more cluster storage devices. Cluster storage devices allow different servers to share the same data and, by reading this data, provide failover for resources.

Connecting Storage Devices
The preferred technique for connecting storage devices is Fibre Channel. When using three or more nodes, Fibre Channel is the only technique that should be used.
When using 2-node clustering with Advanced Server, SCSI or Fibre Channel can be used to connect to the storage devices.
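As a quick reference, the storage-interconnect guidance above reduces to a simple rule, sketched here in Python (illustrative only):

# Fibre Channel is always an option; SCSI is acceptable only for a
# 2-node cluster on Advanced Server.
def allowed_storage_interconnects(node_count, edition="Advanced Server"):
    options = ["Fibre Channel"]
    if node_count == 2 and edition == "Advanced Server":
        options.append("SCSI")
    return options

print(allowed_storage_interconnects(2))   # ['Fibre Channel', 'SCSI']
print(allowed_storage_interconnects(4))   # ['Fibre Channel']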
Figure 6. Multi-node clustering with all nodes active.

Figure 6 above shows a configuration where all nodes in a database cluster are active and each node has a separate resource group. With a partitioned view of the database, each resource group could handle different types of requests. The types of requests handled could be based on one or more factors, such as the name of an account or geographic location. In the event of a failure, each node is configured to fail over to the next node in turn.
Resource Groups
Resources that are related to or dependent on each other are associated through resource groups. Only applications that need high availability should be part of a resource group. Other applications can run on a cluster server but don't need to be part of a resource group. Before adding an application to a resource group, IT staff must determine whether the application can work within the cluster environment.

Cluster-Aware Applications. Applications that can work within the cluster environment and support cluster events are called cluster-aware. Cluster-aware applications can register with the Server Cluster to receive status and notification information.

Cluster-Unaware Applications. Applications that do not support cluster events are called cluster-unaware. Some cluster-unaware applications can still be assigned to resource groups and failed over.

Applications that meet the following criteria can be assigned to resource groups (a simple eligibility sketch follows the list):
IP-based protocols are used for cluster communications. The application must use an IP-based protocol for its network communications. Applications cannot use NetBEUI, IPX, AppleTalk, or other protocols to communicate.
Nodes in the cluster access application data through shared storage devices. If the application isn't able to store its data in a configurable location, the application data won't be available on failover.
Client applications experience a temporary loss of network connectivity when failover occurs. If client applications cannot retry and recover from this, they will cease to function normally.
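The three criteria above can be expressed as a simple checklist. The sketch below (Python, with hypothetical field names; not a real cluster API) shows one way to evaluate an application against them:

# Checklist based on the criteria above; field names are hypothetical.
def resource_group_eligibility(app):
    checks = {
        "uses an IP-based protocol (no NetBEUI, IPX, or AppleTalk)":
            app.get("network_protocol") == "IP",
        "stores its data in a configurable location on shared storage":
            app.get("data_location_configurable", False),
        "clients can retry after a brief loss of connectivity":
            app.get("clients_retry_on_failover", False),
    }
    failed = [name for name, passed in checks.items() if not passed]
    return len(failed) == 0, failed

eligible, problems = resource_group_eligibility({
    "network_protocol": "IP",
    "data_location_configurable": True,
    "clients_retry_on_failover": False,
})
print(eligible)   # False
print(problems)   # ['clients can retry after a brief loss of connectivity']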
New Features for Resources and Resource Types
Windows .NET Server adds new features for resources and resource types. A new resource type allows applications to be made cluster-aware using VBScript and JScript. Additionally, Windows Management Instrumentation (WMI) can be used for cluster management and event notification.

Architecting Resource Groups
When architecting resource groups, IT staff should list all server-based applications and services that will run in the cluster environment, regardless of whether they will need high availability. Afterward, divide the list into three sections:
Those that need to be highly available
Those that aren't part of the cluster and on which clustered resources do not depend
Those that are running on the cluster servers, do not support failover, and on which the cluster may depend

Applications and services that need to be highly available should be placed into resource groups. Other applications should be tracked, and their interactions with clustered applications and services should be clearly understood. Failure of an application or service that isn't part of a resource group shouldn't impact the core functions of the solution being offered. If it does, the application or service may need to be clustered.

Note: In the case of dependent services that don't support clustering, IT staff may want to provide backup planning in case these services fail, or may want to attempt to make the services cluster-aware using VBScript and JScript. Remember that only Windows .NET Server supports this feature.

Focus on selecting the right hardware to meet the needs of the service offering. A cluster model should be chosen to adequately support resource failover and the availability requirements. Based on the model chosen, excess capacity should be added to ensure that storage, processor, and memory are available in the event a resource fails and failover to a server substantially increases the workload.

With a clustered SQL Server configuration, IT staff should consider using high-end CPUs, fast hard drives, and additional memory. SQL Server 2000 and standard services together use over 100 MB of memory as a baseline. User connections consume about 24 KB each. While the minimum memory for query execution is 1 MB of RAM, the average query may require two to four MB of RAM. Other SQL Server processes use memory as well.
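Using the figures quoted above (roughly 100 MB baseline, about 24 KB per user connection, and two to four MB per executing query), a rough memory estimate for a clustered SQL Server node can be sketched as follows (illustrative arithmetic only, not a capacity-planning tool):

# Back-of-the-envelope memory estimate for a clustered SQL Server node.
def estimated_sql_memory_mb(user_connections, concurrent_queries,
                            avg_query_mb=3.0, baseline_mb=100.0):
    connection_mb = user_connections * 24.0 / 1024.0   # about 24 KB each
    query_mb = concurrent_queries * avg_query_mb       # roughly 2-4 MB each
    return baseline_mb + connection_mb + query_mb

# Example: 2,000 user connections and 50 concurrent queries.
print(round(estimated_sql_memory_mb(2000, 50)))   # ~297 (MB)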
Table 2. RAID Configurations

RAID 5+1 (disk striping with parity plus mirroring). Six or more volumes, each on a separate drive, are configured identically as a mirrored stripe set with parity error checking. Provides a very high level of fault tolerance, but has a lot of overhead.

RAID 5 (disk striping with parity). Three or more volumes, each on a separate drive, are configured as a stripe set with parity error checking. In the case of failure, data can be recovered. Provides fault tolerance with less overhead than mirroring and better read performance than disk mirroring.

RAID 1 (disk mirroring). Two volumes on two drives are configured identically. Data is written to both drives; if one drive fails, there is no data loss because the other drive contains the data. (Does not include disk striping.) Provides redundancy and better write performance than disk striping with parity.

RAID 0+1 (disk striping with mirroring). Two or more volumes, each on a separate drive, are striped and mirrored. Data is written sequentially to drives that are identically configured.

RAID 0 (disk striping). Two or more volumes, each on a separate drive, are configured as a stripe set. Data is broken into blocks, called stripes, and then written sequentially to all drives in the stripe set.
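The overhead trade-offs in Table 2 follow directly from how each RAID level uses its drives. The sketch below (Python; standard arithmetic, assuming equally sized drives) shows the fraction of raw capacity that remains usable at each level:

# Usable capacity as a fraction of raw disk space for the levels in Table 2.
def usable_fraction(level, drives):
    if level == "0":       # striping only: full capacity, no fault tolerance
        return 1.0
    if level == "1":       # mirroring: half the capacity
        return 0.5
    if level == "5":       # striping with parity: one drive's worth of parity
        return (drives - 1) / drives
    if level == "0+1":     # striped mirrors: half the capacity
        return 0.5
    if level == "5+1":     # mirrored stripe set with parity
        half = drives // 2
        return (half - 1) / half * 0.5
    raise ValueError("unknown RAID level: %s" % level)

for level, n in [("0", 4), ("1", 2), ("5", 4), ("0+1", 6), ("5+1", 6)]:
    print("RAID %-3s on %d drives: %d%% usable"
          % (level, n, usable_fraction(level, n) * 100))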
Some clusters may not need public network addresses and instead may be configured to use two private networks. In this case, the first private network is for node-to-node communications, and the second private network is for communicating with other servers that are part of the service offering.
SAN interconnects generally offer better performance than comparable traditional networks. Windows 2000 and Windows .NET Datacenter Server implement a feature called Winsock Direct that allows direct communication over a SAN using SAN providers.

SAN providers have user-mode access to hardware transports. When communicating directly at the hardware level, the individual transport endpoints can be mapped directly into the address space of application processes running in user mode. This allows applications to pass messaging requests directly to the SAN hardware interface, which eliminates unnecessary system calls and data copying.

SANs typically use two transfer modes. One mode is for small transfers, which primarily consist of transfer control information. For large transfers, SANs can use a bulk mode whereby data is transferred directly between the local system and the remote system by the SAN hardware interface, without CPU involvement on the local or remote system. All bulk transfers are pre-arranged through an exchange of transfer control messages.

Other SAN Benefits
In addition to improved communication modes, SANs have other benefits. They allow IT staff to consolidate storage needs, using several highly reliable storage devices instead of many. They also allow IT staff to share storage with non-Windows operating systems, allowing for heterogeneous operating environments.
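Returning to the two transfer modes described above, the mode choice can be pictured as a simple size test. The cut-over threshold below is an assumption for illustration; real SAN providers negotiate this internally.

# Sketch of the two SAN transfer modes, chosen by message size (threshold
# is assumed, for illustration only).
SMALL_TRANSFER_LIMIT = 4096   # bytes (assumed)

def choose_transfer_mode(message_size_bytes):
    """Small messages carry control information inline; large payloads are
    pre-arranged with control messages, then moved in bulk mode by the SAN
    hardware without CPU involvement on either system."""
    return "small" if message_size_bytes <= SMALL_TRANSFER_LIMIT else "bulk"

print(choose_transfer_mode(512))          # small
print(choose_transfer_mode(8 * 2**20))    # bulk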
Dedicated adapter: handles client-to-cluster network traffic and other traffic originating outside the cluster network.
NLB uses unicast or multicast broadcasts to direct incoming traffic to all servers in the cluster. The NLB driver on each host acts as a filter between the cluster adapter and the TCP/IP stack, allowing only traffic bound for the designated host to be received. NLB only controls the flow of TCP, UDP, and GRE traffic on specified ports. It doesn't control the flow of TCP, UDP, and GRE traffic on non-specified ports, and it doesn't control the flow of other incoming IP traffic. All traffic that isn't controlled is passed through without modification to the IP stack.

Using a Single NLB Network Adapter
NLB can work with a single network adapter. When it does, there are limitations.
Unicast mode. With a single adapter in unicast mode, node-to-node communications are not possible, meaning nodes within the cluster cannot communicate with each other. Servers can, however, communicate with servers outside the cluster subnet.
Multicast mode. With a single adapter in multicast mode, node-to-node communications are possible, as are communications with servers outside the cluster subnet. However, the configuration is not optimal for handling moderate-to-heavy traffic from outside the cluster subnet to specific cluster hosts. For handling node-to-node communications and moderate-to-heavy traffic, two adapters should be used.
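The filtering behavior described at the start of this section can be sketched conceptually as follows (Python; the port rule shown is hypothetical, and the real NLB driver operates inside the Windows network stack rather than in application code):

# Conceptual sketch of NLB traffic filtering with an example port rule.
BALANCED_PROTOCOLS = {"TCP", "UDP", "GRE"}
BALANCED_PORTS = {80, 443}    # hypothetical port rule

def handle_packet(protocol, port, designated_for_this_host):
    if protocol in BALANCED_PROTOCOLS and port in BALANCED_PORTS:
        # Load-balanced traffic: only the designated host receives it.
        return "accept" if designated_for_this_host else "drop"
    # Traffic NLB does not control passes through to the IP stack unmodified.
    return "pass-through"

print(handle_packet("TCP", 80, True))      # accept
print(handle_packet("TCP", 80, False))     # drop
print(handle_packet("TCP", 1433, False))   # pass-through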
COM+ components use the Component Object Model (COM) and COM+ Services to specify their configuration and attributes. Groups of COM+ components that work together to handle common functions are referred to as COM+ applications.
CLB Key Structures
Figure 8 below provides an overview of CLB. CLB uses several key structures:
CLB software handles the load balancing and is responsible for determining the order in which cluster members activate components.
The router handles message routing between the front-end Web servers and the application servers. It can be implemented through component routing lists stored on the front-end Web servers, or through a component routing cluster configured on separate servers.
Application server clusters activate and run COM+ components. The application server cluster is managed by Application Center 2000.
Routing List
The routing list, made available to the router, is used to track the response time of each application server from the Web servers. If the routing list is stored on individual Web servers, each server has its own routing list and uses this list to periodically check the response times of the application servers. If the routing list is stored on a separate routing cluster, the routing cluster servers handle this task. The goal of tracking the response time is to determine which application server has the fastest response time from a given Web server. The response times are tracked in an in-memory table and are used in round-robin fashion to determine the application server to which an incoming request should be passed. The application server with the fastest response time (and, theoretically, the least busy and most able to handle a request) is given the next request. The next request goes to the application server with the next fastest time, and so on.
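A minimal sketch of the routing-list idea follows (Python; server names and timings are hypothetical, and a real routing list would refresh its response-time table periodically):

# In-memory table of response times, ordered fastest-first and used
# round-robin to dispatch incoming requests.
from itertools import cycle

response_times_ms = {"AppServer1": 12.0, "AppServer2": 7.5, "AppServer3": 20.0}

# Fastest first: AppServer2, then AppServer1, then AppServer3.
routing_order = sorted(response_times_ms, key=response_times_ms.get)
next_server = cycle(routing_order)

for _ in range(5):                # dispatch five incoming requests
    print(next(next_server))      # AppServer2, AppServer1, AppServer3, AppServer2, AppServer1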
Summary
Cluster technologies are becoming increasingly important in ensuring that service offerings meet the requirements of the enterprise. Windows 2000 and Windows .NET Server support three cluster technologies to provide high availability, reliability, and scalability: NLB, CLB, and Server Cluster. Each technology has a specific purpose and is designed to meet different requirements.

Server Cluster provides failover support for applications and services that require high availability, scalability, and reliability, and is ideally suited for back-end applications and services, such as database servers. Server Cluster can use various combinations of active and passive nodes to provide failover support for mission-critical applications and services.

NLB provides failover support for IP-based applications and services that require high scalability and availability, and is ideally suited for Web-tier and front-end services. NLB clusters can use multiple adapters and different broadcast methods to assist in the load balancing of TCP, UDP, and GRE traffic requests.

Component Load Balancing provides dynamic load balancing of middle-tier application components that use COM+ and is ideally suited for application servers. CLB uses two clusters: the routing cluster can be configured as a routing list on the front-end Web servers or as separate servers that run Server Cluster.

Cluster technologies by themselves are not enough to ensure that high availability goals can be met. Multiple physical locations may be necessary to guard against natural disasters and other events that may cause a complete service outage. Effective processes and procedures, in addition to good architecture, are the keys to high availability.
Related Links
See the following resources for further information:
Windows 2000 Server at http://www.microsoft.com/windows2000/server/
Windows .NET at http://www.microsoft.com/net/
Application Center 2000 at http://www.microsoft.com/applicationcenter/
Server Clusters and Load Balancing at www.microsoft.com/windows2000/technologies/clustering/
Increasing System Reliability and Availability with Windows 2000 at http://www.microsoft.com/windows2000/server/evaluation/business/relavail.asp
Hardware Compatibility List at http://www.microsoft.com/hcl/
Windows 2000 Server Family: Advanced Scalability at www.microsoft.com/windows2000/advancedserver/evaluation/business/overview/scalable/default.asp