Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V8.1
Jon Tate
Erwan Auffret
Pawel Brodacki
Libor Miklas
Glen Routley
James Whitaker
Redbooks
International Technical Support Organization
February 2018
SG24-7933-06
Note: Before using this information and the product it supports, read the information in “Notices” on
page xiii.
This edition applies to IBM Spectrum Virtualize V8.1 and the associated hardware and software detailed
within. Note that the screen captures included within this book might differ from the generally available (GA)
version, because parts of this book were written with pre-GA code.
© Copyright International Business Machines Corporation 2011, 2018. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Chapter 3. Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.1 General planning rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.1.1 Basic planning flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.2 Planning for availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.3 Connectivity planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.4 Physical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.4.1 Planning for power outages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.4.2 Cabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.5 Planning IP connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.5.1 Firewall planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.6 SAN configuration planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.6.1 Physical topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.6.2 Zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.6.3 SVC cluster system zone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.6.4 Back-end storage zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.6.5 Host zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.6.6 Zoning considerations for Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . 70
3.6.7 Port designation recommendations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.6.8 Port masking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.7 iSCSI configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.7.1 iSCSI protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.7.2 Topology and IP addressing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.7.3 General preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.7.4 iSCSI back-end storage attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.8 Back-end storage subsystem configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.9 Storage pool configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.9.1 The storage pool and SAN Volume Controller cache relationship . . . . . . . . . . . . 79
3.10 Volume configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.10.1 Planning for image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.10.2 Planning for thin-provisioned volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.11 Host attachment planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.11.1 Queue depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.11.2 Offloaded data transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.12 Host mapping and LUN masking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.12.1 Planning for large deployments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.13 NPIV planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.14 Advanced Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.14.1 FlashCopy guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.14.2 Combining FlashCopy and Metro Mirror or Global Mirror . . . . . . . . . . . . . . . . . . 87
3.14.3 Planning for Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.15 SAN boot support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.16 Data migration from a non-virtualized storage subsystem . . . . . . . . . . . . . . . . . . . . . 91
3.17 SAN Volume Controller configuration backup procedure . . . . . . . . . . . . . . . . . . . . . . 92
3.18 IBM Spectrum Virtualize Port Configurator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.19 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.19.1 SAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.19.2 Back-end storage subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.19.3 SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.19.4 IBM Real-time Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.19.5 Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.7 Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.8 Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.9 Access. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.9.1 Users. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5.9.2 Audit log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
5.10 Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
5.10.1 Notifications menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
5.10.2 Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.10.3 System menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
5.10.4 Support menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
5.10.5 GUI preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.11 Additional frequent tasks in GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.11.1 Renaming components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.11.2 Changing system topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
7.7 HyperSwap and the mkvolume command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
7.7.1 Volume manipulation commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
7.8 Mapping volumes to host after creation of volume . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
7.8.1 Mapping newly created volumes to the host using the wizard . . . . . . . . . . . . . . 295
7.9 Migrating a volume to another storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
7.10 Migrating volumes using the volume copy feature . . . . . . . . . . . . . . . . . . . . . . . . . . 303
7.11 Volume operations using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
7.11.1 Creating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
7.11.2 Volume information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
7.11.3 Creating a thin-provisioned volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
7.11.4 Creating a volume in image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
7.11.5 Adding a mirrored volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
7.11.6 Adding a compressed volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
7.11.7 Splitting a mirrored volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
7.11.8 Modifying a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
7.11.9 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
7.11.10 Using volume protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
7.11.11 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
7.11.12 Assigning a volume to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
7.11.13 Showing volumes to host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
7.11.14 Deleting a volume to host mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
7.11.15 Migrating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
7.11.16 Migrating a fully managed volume to an image mode volume . . . . . . . . . . . . 325
7.11.17 Shrinking a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
7.11.18 Showing a volume on an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
7.11.19 Showing which volumes are using a storage pool . . . . . . . . . . . . . . . . . . . . . 327
7.11.20 Showing which MDisks are used by a specific volume . . . . . . . . . . . . . . . . . . 328
7.11.21 Showing from which storage pool a volume has its extents . . . . . . . . . . . . . . 328
7.11.22 Showing the host to which the volume is mapped . . . . . . . . . . . . . . . . . . . . . 330
7.11.23 Showing the volume to which the host is mapped . . . . . . . . . . . . . . . . . . . . . 330
7.11.24 Tracing a volume from a host back to its physical disk . . . . . . . . . . . . . . . . . . 331
7.12 I/O throttling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
7.12.1 Define a volume throttle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
7.12.2 View existing volume throttles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
7.12.3 Remove a volume throttle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
8.5.3 Adding and deleting a host port by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . 385
8.5.4 Host cluster operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
11.1.12 Reverse FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
11.1.13 FlashCopy and image mode Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
11.1.14 FlashCopy mapping events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
11.1.15 Thin provisioned FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
11.1.16 Serialization of I/O by FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
11.1.17 Event handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
11.1.18 Asynchronous notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
11.1.19 Interoperation with Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . 491
11.1.20 FlashCopy attributes and limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
11.2 Managing FlashCopy by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
11.2.1 FlashCopy presets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
11.2.2 FlashCopy window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
11.2.3 Creating a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
11.2.4 Single-click snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
11.2.5 Single-click clone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
11.2.6 Single-click backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
11.2.7 Creating a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
11.2.8 Creating FlashCopy mappings in a Consistency Group . . . . . . . . . . . . . . . . . . 512
11.2.9 Showing related Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
11.2.10 Moving FlashCopy mappings across Consistency Groups. . . . . . . . . . . . . . . 516
11.2.11 Removing FlashCopy mappings from Consistency Groups . . . . . . . . . . . . . . 517
11.2.12 Modifying a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
11.2.13 Renaming FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
11.2.14 Deleting FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
11.2.15 Deleting a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
11.2.16 Starting FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
11.2.17 Stopping FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
11.2.18 Memory allocation for FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
11.3 Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
11.3.1 Considerations for using Transparent Cloud Tiering. . . . . . . . . . . . . . . . . . . . . 529
11.3.2 Transparent Cloud Tiering as backup solution and data migration. . . . . . . . . . 529
11.3.3 Restore using Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
11.3.4 Transparent Cloud Tiering restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
11.4 Implementing Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
11.4.1 DNS Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
11.4.2 Enabling Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
11.4.3 Creating cloud snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
11.4.4 Managing cloud snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
11.4.5 Restoring cloud snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
11.5 Volume mirroring and migration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
11.6 Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
11.6.1 IBM SAN Volume Controller and Storwize system layers. . . . . . . . . . . . . . . . . 544
11.6.2 Multiple IBM Spectrum Virtualize systems replication. . . . . . . . . . . . . . . . . . . . 546
11.6.3 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
11.6.4 Remote copy intercluster communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
11.6.5 Metro Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
11.6.6 Synchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
11.6.7 Metro Mirror features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
11.6.8 Metro Mirror attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
11.6.9 Practical use of Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
11.6.10 Global Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
11.6.11 Asynchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
11.6.12 Global Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
11.6.13 Using Change Volumes with Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
11.6.14 Distribution of work among nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
11.6.15 Background copy performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
11.6.16 Thin-provisioned background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
11.6.17 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
11.6.18 Practical use of Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
11.6.19 IBM Spectrum Virtualize HyperSwap topology . . . . . . . . . . . . . . . . . . . . . . . . 563
11.6.20 Consistency Protection for GM/MM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
11.6.21 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror . . . . . . . . 564
11.6.22 Remote Copy configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
11.6.23 Remote Copy states and events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
11.7 Remote Copy commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
11.7.1 Remote Copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
11.7.2 Listing available system partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
11.7.3 Changing the system parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
11.7.4 System partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
11.7.5 Creating a Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 576
11.7.6 Creating a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 576
11.7.7 Changing Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 576
11.7.8 Changing Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 577
11.7.9 Starting Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . 577
11.7.10 Stopping Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 577
11.7.11 Starting Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . . 578
11.7.12 Stopping Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 578
11.7.13 Deleting Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 578
11.7.14 Deleting Metro Mirror/Global Mirror consistency group. . . . . . . . . . . . . . . . . . 578
11.7.15 Reversing Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 579
11.7.16 Reversing Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . 579
11.8 Native IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
11.8.1 Native IP replication technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
11.8.2 IP partnership limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
11.8.3 IP Partnership and data compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
11.8.4 VLAN support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
11.8.5 IP partnership and terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
11.8.6 States of IP partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
11.8.7 Remote copy groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
11.8.8 Supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
11.9 Managing Remote Copy by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
11.9.1 Creating Fibre Channel partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
11.9.2 Creating remote copy relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
11.9.3 Creating Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
11.9.4 Renaming remote copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
11.9.5 Renaming a remote copy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 615
11.9.6 Moving stand-alone remote copy relationships to Consistency Group . . . . . . . 616
11.9.7 Removing remote copy relationships from Consistency Group . . . . . . . . . . . . 617
11.9.8 Starting remote copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
11.9.9 Starting a remote copy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
11.9.10 Switching a relationship copy direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
11.9.11 Switching a Consistency Group direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
11.9.12 Stopping remote copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
11.9.13 Stopping a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
11.9.14 Deleting remote copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
11.9.15 Deleting a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
11.9.16 Remote Copy memory allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
11.10 Troubleshooting remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
11.10.1 1920 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
11.10.2 1720 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
13.3 Configuration backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
13.3.1 Backup using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
13.3.2 Saving the backup using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
13.4 Software update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
13.4.1 Precautions before the update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
13.4.2 IBM Spectrum Virtualize update test utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
13.4.3 Update procedure to IBM Spectrum Virtualize V8.1 . . . . . . . . . . . . . . . . . . . . . 707
13.4.4 Updating IBM Spectrum Virtualize with a Hot Spare Node . . . . . . . . . . . . . . . . 713
13.4.5 Updating IBM SAN Volume Controller internal drives code . . . . . . . . . . . . . . . 714
13.4.6 Updating the IBM SAN Volume Controller system manually . . . . . . . . . . . . . . 718
13.5 Health Checker feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
13.6 Troubleshooting and fix procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720
13.6.1 Managing event log. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722
13.6.2 Running a fix procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
13.6.3 Resolve alerts in a timely manner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
13.6.4 Event log details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
13.7 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 728
13.7.1 Email notifications and the Call Home function. . . . . . . . . . . . . . . . . . . . . . . . . 729
13.7.2 Disabling and enabling notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
13.7.3 Remote Support Assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
13.7.4 SNMP Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
13.7.5 Syslog notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
13.8 Audit log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
13.9 Collecting support information using the GUI and the CLI . . . . . . . . . . . . . . . . . . . . 744
13.9.1 Collecting information using the GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745
13.9.2 Collecting logs using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
13.9.3 Uploading files to the Support Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
13.10 Service Assistant Tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 750
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, Bluemix®, DB2®, developerWorks®, DS4000®, DS8000®, Easy Tier®, FlashCopy®, HyperSwap®, IBM®, IBM FlashSystem®, IBM Spectrum™, IBM Spectrum Accelerate™, IBM Spectrum Control™, IBM Spectrum Protect™, IBM Spectrum Scale™, IBM Spectrum Storage™, IBM Spectrum Virtualize™, PowerHA®, Real-time Compression™, Redbooks®, Redbooks (logo)®, Storwize®, System Storage®, Tivoli®, XIV®
SoftLayer, and The Planet are trademarks or registered trademarks of SoftLayer, Inc., an IBM Company.
Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redbooks® publication is a detailed technical guide to the IBM System Storage®
SAN Volume Controller, which is powered by IBM Spectrum™ Virtualize V8.1.
IBM SAN Volume Controller is a virtualization appliance solution that maps virtualized
volumes that are visible to hosts and applications to physical volumes on storage devices.
Each server within the storage area network (SAN) has its own set of virtual storage
addresses that are mapped to physical addresses. If the physical addresses change, the
server continues running by using the same virtual addresses that it had before. Therefore,
volumes or storage can be added or moved while the server is still running.
The IBM virtualization technology improves the management of information at the “block”
level in a network, which enables applications and servers to share storage devices on a
network.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, San Jose Center.
Catarina Castro, Frank Enders, Giulio Fiscella, Dharmesh Kamdar, Paulo Tomiyoshi Takeda
Thanks to the following people for their contributions to this project:
Lee Sanders
Christopher Bulmer
Paul Cashman
Carlos Fuente
IBM Hursley, UK
Catarina Castro
IBM Manchester, UK
Navin Manohar
Terry Niemeyer
IBM Systems, US
Chris Saul
IBM Systems, US
Detlef Helmbrecht
IBM ATS, Germany
Special thanks to the Brocade Communications Systems staff in San Jose, California for their
support of this residency in terms of equipment and support in many areas:
Silviano Gaona
Sangam Racherla
Brian Steffler
Marcus Thordal
Brocade Communications Systems
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Summary of changes
This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.
Summary of Changes
for SG24-7933-06
for Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum
Virtualize V8.1
as created or updated on February 28, 2018.
New information
New-look GUI
Hot Spare node
RAS line items
Changed information
Added new GUI windows throughout
The focus of this publication is virtualization at the disk layer, which is referred to as
block-level virtualization or the block aggregation layer. A description of file system
virtualization is beyond the intended scope of this book.
The Storage Networking Industry Association’s (SNIA) block aggregation model provides a
useful overview of the storage domain and the layers, as shown in Figure 1-1. It illustrates
several layers of a storage domain:
File
Block aggregation
Block subsystem layers
The model splits the block aggregation layer into three sublayers. Block aggregation can be
realized within hosts (servers), in the storage network (storage routers and storage
controllers), or in storage devices (intelligent disk arrays).
The IBM SAN Volume Controller is implemented as a clustered appliance in the storage
network layer. The IBM Storwize family is deployed as modular storage that provides
capabilities to virtualize its own internal storage and external storage.
The key concept of virtualization is to decouple the storage from the storage functions that
are required in the storage area network (SAN) environment.
Decoupling means abstracting the physical location of data from the logical representation of
the data. The virtualization engine presents logical entities to the user and internally manages
the process of mapping these entities to the actual location of the physical storage.
The actual mapping that is performed depends on the specific implementation, such as the
granularity of the mapping, which can range from a small fraction of a physical disk up to the
full capacity of a physical disk. A single block of information in this environment is identified by
its logical unit number (LUN), which is the physical disk, and an offset within that LUN, which
is known as a logical block address (LBA).
The term physical disk is used in this context to describe a piece of storage that might be
carved out of a Redundant Array of Independent Disks (RAID) array in the underlying disk
subsystem. Specific to the IBM Spectrum Virtualize implementation, the logical entity whose address space is mapped to the physical storage in this way is referred to as a volume, and the arrays of physical disks that provide that capacity are presented to the system as managed disks (MDisks).
The server and application are aware of the logical entities only. They access these entities by using a consistent interface that is provided by the virtualization layer.
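How a volume maps to MDisks can be inspected directly from the command-line interface (CLI). The following listing is a minimal sketch only: the volume name VOL_DB01 is hypothetical, and the exact output columns depend on the installed code level.

# List the MDisks that contribute extents to a given volume
lsvdiskextent VOL_DB01
# List the volumes that consume extents on a given MDisk
lsmdiskextent mdisk0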
The functionality of a volume that is presented to a server, such as expanding or reducing the
size of a volume, mirroring a volume, creating an IBM FlashCopy®, and thin provisioning, is
implemented in the virtualization layer. It does not rely in any way on the functionality that is
provided by the underlying disk subsystem. Data that is stored in a virtualized environment is
stored in a location-independent way, which enables a user to move or migrate data between
physical locations, which are referred to as storage pools.
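As an illustration of functions that run entirely in the virtualization layer, the following CLI sketch uses hypothetical volume and pool names; the exact parameters for your code level are documented in the command-line reference.

# Expand a volume by 10 GB without changing the back-end LUN layout
expandvdisksize -size 10 -unit gb VOL_DB01
# Add a second, mirrored copy of the volume in another storage pool
addvdiskcopy -mdiskgrp Pool1 VOL_DB01
# Alternatively, migrate the volume nondisruptively to another storage pool
migratevdisk -vdisk VOL_DB01 -mdiskgrp Pool1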
The IBM SAN Volume Controller delivers all these functions in a homogeneous way on a scalable and highly available software platform, over any attached storage and to any attached server.
You can see the importance of addressing the complexity of managing storage networks by
applying the total cost of ownership (TCO) metric to storage networks. Industry analyses
show that storage acquisition costs are only about 20% of the TCO. Most of the remaining
costs relate to managing the storage system.
But how much of the management of multiple systems, with separate interfaces, can be
handled as a single entity? In a non-virtualized storage environment, every system is an
“island” that must be managed separately.
Because IBM Spectrum Virtualize provides many functions, such as mirroring and IBM FlashCopy, there is no need to acquire them separately for each attached disk subsystem that is virtualized by IBM Spectrum Virtualize.
Today, it is typical that open systems run at less than 50% of the usable capacity that is provided by the RAID disk subsystems. A block-level virtualization solution, such as IBM Spectrum Virtualize, can allow significant savings, increase the effective capacity of storage systems up to five times, and decrease your need for floor space, power, and cooling.
The intent of this book is to cover the major software features and provide a brief summary of
supported hardware.
The node model 2145-SV1 features two Intel Xeon E5 v4 eight-core processors and 64
gigabytes (GB) of memory with options to increase the total amount of memory up to 256 GB.
The IBM SAN Volume Controller model 2145-SV1 provides three 10-gigabit (Gb) Ethernet ports that can be used for iSCSI connectivity and system management. The node model 2145-SV1 supports up to four I/O adapters, which provide up to sixteen 16 Gb Fibre Channel ports or up to four 10 Gb Ethernet ports that can be configured for iSCSI or Fibre Channel over Ethernet (FCoE).
In a clustered solution, the IBM SAN Volume Controller node model 2145-SV1 can support up
to 20 expansion enclosures.
The 2145-SV1 model is complemented with the following expansion enclosures (attachable
also to the 2145-DH8):
2145-12F holds up to 12 LFF 3.5” SAS-attached disk drives in 2U enclosures
2145-24F accommodates up to 24 SFF 2.5” SAS-attached disks in 2U enclosures
2145-92F adds up to 92 internal 3.5” SAS flash drives in 5U enclosures
Note: Model 2147 (including expansion enclosures) has identical hardware to Model 2145, but it is delivered with enterprise-level remote IBM support that offers these features:
Technical Advisors to proactively improve problem determination and communication
On-site and remote software installation and updates
Configuration support
Enhanced response times for high severity problems
Enhanced scalability with up to three Peripheral Component Interconnect Express (PCIe) expansion slots, which allow users to install up to three 4-port 8 Gbps FC host bus adapters (HBAs), for a total of 12 ports. It also supports one four-port 10 gigabit Ethernet (GbE) card (iSCSI or FCoE) and one dual-port 12 Gbps serial-attached SCSI (SAS) card for flash drive expansion unit attachment (model 2145-24F).
Improved Random Access Compression Engine (RACE), with the processing offloaded to the secondary dedicated processor and using 36 GB of dedicated memory cache. At a minimum, one Compression Accelerator card must be installed, which supports up to 200 compressed volumes; two Compression Accelerator cards allow up to 512 compressed volumes.
Optional 2U expansion enclosure 2145-24F with up to 24 flash drives (200, 400, 800, or
1600 GB).
Extended functionality of IBM Easy Tier® by storage pool balancing mode within the same
tier. It moves or exchanges extents between highly utilized and low-utilized MDisks within
a storage pool, increasing the read and write performance of the volumes. This function is
enabled automatically in IBM SAN Volume Controller, and does not need any licenses.
The SVC cache rearchitecture splits the original single cache into upper and lower caches
of different sizes. Upper cache uses up to 256 megabytes (MB), and lower cache uses up
to 64 GB of installed memory allocated to both processors (if installed). Also, 36 GB of
memory is always allocated for Real-time Compression if enabled.
Near-instant prepare for FlashCopy due to the presence of the lower cache. Multiple
snapshots of the golden image now share cache data (rather than several N copies).
Software changes:
– Visual and functional enhancements in the GUI, with changed menu layout and an
integrated performance meter on main page.
– Implementation of Distributed RAID, which differs from traditional RAID arrays by eliminating dedicated spare drives. Spare capacity is spread across the disks, making the reconstruction of a failed disk faster.
– Introduced software encryption, which is enabled by IBM Spectrum Virtualize and uses the AES-256 XTS algorithm. Encryption is enabled at the storage pool level, and all newly created volumes in such a pool are automatically encrypted. An encryption license with Universal Serial Bus (USB) flash drives is required.
– Developed the Comprestimator tool, which is included in the IBM Spectrum Virtualize software. It provides statistics to estimate potential storage savings. Available from the CLI (see the sketch after this list), it does not need a compression license and does not trigger any compression process. It uses the same estimation algorithm as the external host-based application, so the results are similar.
– Enhanced GUI wizard for initial configuration of HyperSwap topology. IBM Spectrum
Virtualize now allows IP-attached quorum disks in HyperSwap system configuration.
– Increased the maximum number of iSCSI hosts attached to the system to 2048
(512 host iSCSI qualified names (IQNs) per I/O group) with a maximum of four iSCSI
sessions per SVC node (8 per I/O group).
– Improved and optimized read I/O performance in HyperSwap system configuration by
parallel read from primary and secondary local volume copies. Both copies must be in
a synchronized state.
– Extends the support of VVols. Using IBM Spectrum Virtualize, you can manage a one-to-one mapping of VM disks to IBM SAN Volume Controller volumes, which eliminates the I/O contention of a single, shared volume (datastore).
– Customizable login banner. Using CLI commands, you can define a welcome message or an important disclaimer that is presented to users on the login window. The banner is shown in both the GUI and CLI login windows.
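For the integrated Comprestimator that is mentioned in the list above, a typical CLI sequence looks similar to the following sketch. The volume name is hypothetical, and command availability depends on the installed code level, so check the command-line reference for your release.

# Start a compression-savings estimate for a single volume
analyzevdisk VOL_DB01
# Display the estimated savings when the analysis completes
lsvdiskanalysis VOL_DB01
# Or queue an analysis of every volume in the system
analyzevdiskbysystem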
1.4 Summary
The use of storage virtualization is the foundation for a flexible and reliable storage solution that helps enterprises to better align business and IT organizations by optimizing the storage infrastructure and storage management to meet business demands.
IBM Spectrum Virtualize running on IBM SAN Volume Controller is a mature, ninth-generation
virtualization solution that uses open standards and complies with the SNIA storage model.
IBM SAN Volume Controller is an appliance-based, in-band block virtualization process in
which intelligence (including advanced storage functions) is ported from individual storage
devices to the storage network.
IBM Spectrum Virtualize can improve the usage of your storage resources, simplify storage
management, and improve the availability of business applications.
All the concepts that are included in this chapter are described in greater detail in later chapters.
One goal of this project was to create a system that was composed almost exclusively of commercial off-the-shelf (COTS) standard parts. As with any enterprise-level storage control system, it had to deliver a level of performance and availability that was comparable to the highly optimized storage controllers of previous generations. The idea of building a storage control system that is based on a scalable cluster of lower-performance servers, rather than on a monolithic architecture of two nodes, remains compelling.
COMPASS also had to address a major challenge for the heterogeneous open systems
environment, namely to reduce the complexity of managing storage on block devices.
The first documentation that covered this project was released to the public in 2003 in the
form of the IBM Systems Journal, Vol. 42, No. 2, 2003, “The software architecture of a SAN
storage control system,” by J. S. Glider, C. F. Fuente, and W. J. Scales. The article is available
at the following website:
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5386853
The results of the COMPASS project defined the fundamentals for the product architecture.
The first release of IBM System Storage SAN Volume Controller was announced in July 2003.
Each of the following releases brought new and more powerful hardware nodes, which
approximately doubled the I/O performance and throughput of its predecessors, provided new
functionality, and offered more interoperability with new elements in host environments, disk
subsystems, and the storage area network (SAN).
The following major approaches are used today for the implementation of block-level
aggregation and virtualization:
Symmetric: In-band appliance
Virtualization splits the storage that is presented by the storage systems into smaller
chunks that are known as extents. These extents are then concatenated, by using various
policies, to make virtual disks (volumes). With symmetric virtualization, host systems can
be isolated from the physical storage. Advanced functions, such as data migration, can run
without the need to reconfigure the host.
With symmetric virtualization, the virtualization engine is the central configuration point for
the SAN. The virtualization engine directly controls access to the storage, and to the data
that is written to the storage. As a result, locking functions that provide data integrity and
advanced functions, such as cache and Copy Services, can be run in the virtualization
engine itself.
Therefore, the virtualization engine is a central point of control for device and advanced
function management. Symmetric virtualization enables you to build a firewall in the
storage network. Only the virtualization engine can grant access through the firewall.
Symmetric virtualization can have disadvantages. The main disadvantage is scalability: because all input/output (I/O) must flow through the virtualization engine, the engine can become a performance bottleneck. To solve this problem, you can use an n-way cluster of virtualization engines that has failover capacity.
You can scale processor power, cache memory, and adapter bandwidth to
achieve the level of performance that you want. Additional memory and processing power
are needed to run advanced services, such as Copy Services and caching. The SVC uses
symmetric virtualization. Single virtualization engines, which are known as nodes, are
combined to create clusters. Each cluster can contain from two to eight nodes.
Asymmetric: Out-of-band or controller-based
With asymmetric virtualization, the virtualization engine is outside the data path and
performs a metadata-style service. The metadata server contains all of the mapping and
the locking tables, and the storage devices contain only data. In asymmetric virtual
storage networks, the data flow is separated from the control flow.
A separate network or SAN link is used for control purposes. Because the control flow is separated from the data flow, I/O operations can use the full bandwidth of the SAN.
Asymmetric virtualization can have the following disadvantages:
– Data is at risk to increased security exposures, and the control network must be
protected with a firewall.
– Metadata can become complicated when files are distributed across several devices.
– Each host that accesses the SAN must know how to access and interpret the
metadata. Specific device drivers or agent software must therefore be running on each
of these hosts.
– The metadata server cannot run advanced functions, such as caching or Copy
Services, because it only “knows” about the metadata and not about the data itself.
Figure 2-1 illustrates these two approaches, showing the logical entity (volume), the SAN, and where the virtualization layer sits in each case.
The controller-based approach has high functionality, but it falls short in terms of scalability and upgradeability. Because of the nature of its design, no true decoupling occurs with this approach, which becomes an issue for the lifecycle of the solution, such as when the controller must be replaced. Data migration then raises challenging questions, such as how to reconnect the servers to the new controller, and how to do so online without affecting your applications.
Be aware that with this approach, you not only replace a controller but also implicitly replace
your entire virtualization solution. In addition to replacing the hardware, updating, or
repurchasing the licenses for the virtualization feature, advanced copy functions, and so on,
might be necessary.
Only the fabric-based appliance solution provides an independent and scalable virtualization
platform that can provide enterprise-class copy services and that is open for future interfaces
and protocols. By using the fabric-based appliance solution, you can choose the disk
subsystems that best fit your requirements, and you are not locked into specific SAN
hardware.
For these reasons, IBM chose the SAN-based appliance approach with inline block
aggregation for the implementation of storage virtualization with IBM Spectrum Virtualize.
The IBM SAN Volume Controller includes the following key characteristics:
It is highly scalable, which provides an easy growth path to 2n nodes (nodes are added in pairs because of the cluster function).
It is SAN interface-independent. It supports FC, FCoE, and iSCSI, and it is also open for future enhancements.
It is host-independent for fixed block-based Open Systems environments.
It is external storage RAID controller-independent, which provides a continuous and
ongoing process to qualify more types of controllers.
It can use internal disks that are attached to the nodes (flash drives) or disks that are directly attached in expansion enclosures.
On the SAN storage provided by the disk subsystems, the IBM SAN Volume Controller offers
the following services:
Creates a single pool of storage
Provides logical unit virtualization
Manages logical volumes
Mirrors logical volumes
IBM SAN Volume Controller running IBM Spectrum Virtualize V8.1 also provides these
functions:
Large scalable cache
Copy Services
IBM FlashCopy (point-in-time copy) function, including thin-provisioned FlashCopy to make multiple targets affordable
IBM Transparent Cloud Tiering function that allows the IBM SAN Volume Controller to
interact with Cloud Service Providers
Metro Mirror (synchronous copy)
Global Mirror (asynchronous copy)
Data migration
Space management (Thin Provisioning, Compression)
IBM Easy Tier to automatically migrate data between storage types of different
performance, based on disk workload
Encryption of externally attached storage
Support for HyperSwap
Support for VMware vSphere Virtual Volumes (VVols) and Microsoft ODX
Direct attachment of hosts
Hot Spare nodes that provide a standby function for one or more nodes
The objectives of IBM Spectrum Virtualize are to manage storage resources in your IT
infrastructure, and to ensure that they are used to the advantage of your business. These
processes take place quickly, efficiently, and in real time, while avoiding increases in
administrative costs.
IBM Spectrum Virtualize is the core software engine of the whole family of IBM Storwize
products (see Figure 2-2). The content of this book intentionally focuses on the deployment
considerations of the IBM SAN Volume Controller.
Terminology note: In this book, the terms IBM SAN Volume Controller and SVC are used
to refer to both models of the most recent products as the text applies similarly to both.
Typically, hosts cannot see or operate on the same physical storage (logical unit number
(LUN)) from the RAID controller that is assigned to the IBM SAN Volume Controller. A storage
controller can be shared between the SVC and direct host access, provided that the same
LUNs are not shared between them. The zoning capabilities of the SAN switch must be used
to create distinct zones that ensure that this rule is enforced. SAN fabrics can include
standard FC, FCoE, iSCSI over Ethernet, or possible future types.
Figure 2-3 shows a conceptual diagram of a storage system that uses the SVC. It shows
several hosts that are connected to a SAN fabric or local area network (LAN). In practical
implementations that have high-availability requirements (most of the target clients for the
SVC), the SAN fabric cloud represents a redundant SAN. A redundant SAN consists of a
fault-tolerant arrangement of two or more counterpart SANs, which provide alternative paths
for each SAN-attached device.
Both scenarios (the use of a single network and the use of two physically separate networks)
are supported for iSCSI-based and LAN-based access networks to the SAN Volume
Controller.
Redundant paths to volumes can be provided in both scenarios. For simplicity, Figure 2-3
shows only one SAN fabric and two zones: Host and storage. In a real environment, it is a
leading practice to use two redundant SAN fabrics. IBM SAN Volume Controller can be
connected to up to four fabrics.
A clustered system of IBM SAN Volume Controller nodes that are connected to the same
fabric presents logical disks or volumes to the hosts. These volumes are created from
managed LUNs or managed disks (MDisks) that are presented by the RAID disk subsystems.
As explained in 2.1.1, “IBM SAN Volume Controller architectural overview” on page 14, hosts
are not permitted to operate on the RAID LUNs directly. All data transfer happens through the
IBM SAN Volume Controller nodes.
Additional information: For the most up-to-date information about features, benefits, and
specifications of the IBM SAN Volume Controller models, go to:
https://www.ibm.com/us-en/marketplace/san-volume-controller
The information in this book is valid at the time of writing and covers IBM Spectrum
Virtualize V8.1.0.0. However, as the IBM SAN Volume Controller matures, expect to see
new features and enhanced specifications.
Feature                    Model 2145-SV1                            Model 2145-DH8
I/O ports and management   3x 10 Gb Ethernet ports for 10 GbE        3x 1 Gb Ethernet ports for 1 GbE
                           iSCSI connectivity and system management  iSCSI connectivity and system management
USB ports                  4                                         4
SAS chains                 2                                         2
The following optional features are available for IBM SAN Volume Controller model SV1:
256 GB Cache Upgrade fully unlocked with code V8.1
Four port 16 Gb FC adapter card for 16 Gb FC connectivity
Four port 10 Gb Ethernet adapter card for 10 Gb iSCSI/FCoE connectivity
Compression accelerator card
Four port 12 Gb SAS expansion enclosure attachment card
The following optional features are available for IBM SAN Volume Controller model DH8:
Additional Processor with 32 GB Cache Upgrade
Four port 16 Gb FC adapter card for 16 Gb FC connectivity
Four port 10 Gb Ethernet adapter card for 10 Gb iSCSI/FCoE connectivity
Compression accelerator card
Four port 12 Gb SAS expansion enclosure attachment card
Important: IBM SAN Volume Controller nodes model 2145-SV1 and 2145-DH8 can
contain a 16 Gb FC or a 10 Gb Ethernet adapter, but only one 10 Gbps Ethernet adapter is
supported.
The comparison of current and outdated models of SVC is shown in Table 2-2. Expansion
enclosures are not included in the list.
The IBM SAN Volume Controller expansion enclosure consists of enclosure and drives. Each
enclosure contains two canisters that can be replaced and maintained independently. The
IBM SAN Volume Controller supports three types of expansion enclosure. The expansion
enclosure models are 12F, 24F, and 5U Dense Drawers.
The expansion enclosure model 12F features two expansion canisters and holds up to twelve
3.5-inch SAS drives in a 2U, 19-inch rack mount enclosure.
The expansion enclosure model 24F supports up to twenty-four 2.5-inch SAS drives (flash or
disk drives, or a combination of them). The expansion enclosure 24F also features two
expansion canisters in a 2U, 19-inch rack mount enclosure.
The Dense Expansion Drawer supports up to 92 3.5-inch drives in a 5U, 19-inch rack
mounted enclosure.
The SAN is zoned such that the application servers cannot see the back-end physical
storage. This configuration prevents any possible conflict between the IBM SAN Volume
Controller and the application servers that are trying to manage the back-end storage.
In the next topics, the terms IBM SAN Volume Controller and SVC are used to refer to both
models of the IBM SAN Volume Controller product. The IBM SAN Volume Controller is based
on the components that are described next.
2.2.1 Nodes
Each IBM SAN Volume Controller hardware unit is called a node. The node provides the
virtualization for a set of volumes, cache, and copy services functions. The SVC nodes are
deployed in pairs (cluster), and one or multiple pairs make up a clustered system or system. A
system can consist of 1 - 4 SVC node pairs.
One of the nodes within the system is known as the configuration node. The configuration
node manages the configuration activity for the system. If this node fails, the system chooses
a new node to become the configuration node.
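As an illustrative sketch only, the lsnode CLI command can be used to see which node currently
acts as the configuration node (the node name node1 is hypothetical, and column names can vary
slightly by code level):
# List the nodes in the system; the config node column shows which node
# currently manages configuration activity
lsnode
# Show the detailed view of a single node, including its config_node status
lsnode node1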
Because the active nodes are installed in pairs, each node provides a failover function to its
partner node if a node fails.
A specific volume is always presented to a host server by a single I/O Group of the system.
The I/O Group can be changed.
When a host server performs I/O to one of its volumes, all the I/Os for a specific volume are
directed to one specific I/O Group in the system. Under normal conditions, the I/Os for that
specific volume are always processed by the same node within the I/O Group. This node is
referred to as the preferred node for this specific volume.
Both nodes of an I/O Group act as the preferred node for their own specific subset of the total
number of volumes that the I/O Group presents to the host servers. However, both nodes also
act as failover nodes for their respective partner node within the I/O Group. Therefore, a node
takes over the I/O workload from its partner node, if required.
In an SVC-based environment, the I/O handling for a volume can switch between the two
nodes of the I/O Group. For this reason, it is mandatory for servers that are connected
through FC to use multipath drivers to handle these failover situations.
The SVC I/O Groups are connected to the SAN so that all application servers that are
accessing volumes from this I/O Group have access to this group. Up to 512 host server
objects can be defined per I/O Group. The host server objects can access volumes that are
provided by this specific I/O Group.
If required, host servers can be mapped to more than one I/O Group within the SVC system.
Therefore, they can access volumes from separate I/O Groups. You can move volumes
between I/O Groups to redistribute the load between the I/O Groups. Modifying the I/O Group
that services the volume can be done concurrently with I/O operations if the host supports
nondisruptive volume moves.
Moving a volume between I/O Groups also requires a rescan at the host level to ensure that
the multipathing driver is notified that the allocation of the preferred node has changed and
that the ports by which the volume is accessed have changed. This modification can be done
when one pair of nodes becomes overused.
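For illustration only, a nondisruptive move of a volume to another I/O Group can be performed
with the movevdisk command (the volume and I/O Group names below are hypothetical); always
confirm beforehand that the host supports nondisruptive volume moves:
# Move volume DB_vol01 so that it is serviced by io_grp1
movevdisk -iogrp io_grp1 DB_vol01
# Afterward, rescan paths on the host so that the multipathing driver
# picks up the new preferred node and ports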
All configuration, monitoring, and service tasks are performed at the system level.
Configuration settings are replicated to all nodes in the system. To facilitate these tasks, a
management IP address is set for the system.
A process is provided to back up the system configuration data onto disk so that it can be
restored if there is a disaster. This method does not back up application data. Only the SVC
system configuration information is backed up.
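As a minimal sketch of this process (the backup file name reflects the usual convention and
might differ by release), the configuration backup can be triggered from the CLI:
# Create a backup of the system configuration metadata (not application data)
svcconfig backup
# The backup produces an svc.config.backup.xml file on the configuration node;
# copy it off the system (for example, with scp) and store it safely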
For the purposes of remote data mirroring, two or more systems must form a partnership
before relationships between mirrored volumes are created.
For more information about the maximum configurations that apply to the system, I/O Group,
and nodes, go to:
https://www.ibm.com/support/docview.wss?uid=ssg1S1010644
Each dense drawer can hold up to 92 drives that are positioned in four rows of 14 and an
additional three rows of 12 mounted drive assemblies. The two Secondary Expander
Modules (SEMs) are centrally located in the chassis. One SEM addresses 54 drive ports,
while the other addresses 38 drive ports.
The drive slots are numbered 1 - 14, starting from the left rear slot and working from left to
right, back to front.
Each canister in the dense drawer chassis features two SAS ports, numbered 1 and 2. The
use of SAS port 1 is mandatory because the expansion enclosure must be attached to an
SVC node or to another expansion enclosure. SAS port 2 is optional because it is used to
attach more expansion enclosures.
Figure 2-5 shows a dense expansion drawer.
We added a second scale to Figure 2-6 that gives you an idea of how long it takes to access
the data in a scenario where a single CPU cycle takes one second. This scale gives you an
idea of the importance of future storage technologies closing or reducing the gap between
access times for data that is stored in cache/memory versus access times for data that is
stored on an external medium.
Since magnetic disks were first introduced by IBM in 1956 (in the Random Access Memory
Accounting System, also known as the IBM 305 RAMAC), they have shown remarkable
progress in capacity growth, form factor and size reduction, price (cost per GB), and
reliability.
However, the number of I/Os that a disk can handle and the response time that it takes to
process a single I/O did not improve at the same rate, although they certainly did improve. In
actual environments, you can expect from today’s enterprise-class FC serial-attached SCSI
(SAS) disk up to 200 IOPS per disk with an average response time (a latency) of
approximately 6 ms per I/O.
Today’s spinning disks continue to advance in capacity, up to several terabytes (TB), form
factor/footprint (8.89 cm (3.5 inches), 6.35 cm (2.5 inches), and 4.57 cm (1.8 inches)), and
price (cost per GB), but they are not getting much faster.
The limiting factor is the number of revolutions per minute (RPM) that a disk can perform
(approximately 15,000). This factor defines the time that is required to access a specific data
block on a rotating device. Small improvements likely will occur in the future. However, a
significant step, such as doubling the RPM (if technically even possible), inevitably has an
associated increase in power usage and price that will likely be an inhibitor.
Enterprise-class flash drives typically deliver 500,000 read and 300,000 write IOPS with
typical latencies of 50 µs for reads and 800 µs for writes. Their form factors of 4.57 cm (1.8
inches) /6.35 cm (2.5 inches)/8.89 cm (3.5 inches) and their interfaces (FC/SAS/SATA) make
them easy to integrate into existing disk shelves. The IOPS metrics significantly improve
when flash drives are consolidated in storage arrays (flash array). In this case, the read and
write IOPS are seen in millions for specific 4 KB data blocks.
Flash-drive market
The flash-drive storage market is rapidly evolving. The key differentiator among today’s
flash-drive products is not the storage medium, but the logic in the disk internal controllers.
The top priorities in today’s controller development are optimally handling what is referred to
as wear leveling, which defines the controller’s capability to ensure a device’s durability,
and closing the remarkable gap between read and write I/O performance.
Today’s flash-drive technology is only a first step into the world of high-performance persistent
semiconductor storage. A group of the approximately 10 most promising technologies is
collectively referred to as storage-class memory (SCM).
The most common SSD technology is multi-level cell (MLC). MLC SSDs are commonly found in
consumer products such as portable electronic devices. However, they are also strongly present
in some enterprise storage products. Enterprise-class SSDs are built on mid-to-high-endurance
multi-level cell flash technology, mostly known as mainstream endurance SSDs.
MLC SSDs store more than one bit of data per cell and feature the wear leveling method, which
is the process of evenly spreading data across all memory cells on the SSD. This method helps
to eliminate potential hotspots caused by repetitive write-erase cycles. SLC SSDs use a single
cell to store one bit of data, which generally makes them faster.
To support particular business demands, IBM Spectrum Virtualize has qualified the use of
Read Intensive (RI) SSDs for applications where the proportion of read operations is
significantly high. IBM Spectrum Virtualize presents new attributes when managing disk drives
by using the GUI and CLI. The new function reports the write-endurance limits (in
percentages) for each qualified RI drive that is installed in the system.
Read Intensive SSDs are available as an optional purchase for the IBM SAN Volume Controller
and the IBM Storwize family.
For more information about Read Intensive SSDs and IBM Spectrum Virtualize, see Read
Intensive Flash Drives, REDP-5380.
Storage-class memory
SCM promises a massive improvement in performance (IOPS), areal density, cost, and
energy efficiency compared to today’s flash-drive technology. IBM Research is actively
engaged in these new technologies.
When these technologies become a reality, it will fundamentally change the architecture of
today’s storage infrastructures.
The flash MDisks can then be placed into a single flash drive tier storage pool. High-workload
volumes can be manually selected and placed into the pool to gain the performance benefits
of flash drives.
For a more effective use of flash drives, place the flash drive MDisks into a multitiered storage
pool that is combined with HDD MDisks (generic_hdd tier). Then, once it is turned on, Easy
Tier automatically detects and migrates high-workload extents onto the solid-state MDisks.
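As an illustrative sketch only (the pool name Pool0 is hypothetical, and the available Easy Tier
settings should be verified against your code level), Easy Tier behavior is controlled at the
storage pool level through the CLI:
# Let Easy Tier manage data placement automatically in a multitier pool
chmdiskgrp -easytier auto Pool0
# Check the Easy Tier status and tier composition of the pool
lsmdiskgrp Pool0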
2.2.6 MDisks
The IBM SAN Volume Controller system and its I/O Groups view the storage that is presented
to the SAN by the back-end controllers as several disks or LUNs, which are known as
managed disks or MDisks. Because the SVC does not attempt to provide recovery from
physical disk failures within the back-end controllers, an MDisk often is provisioned from a
RAID array.
However, the application servers do not see the MDisks at all. Rather, they see several logical
disks, which are known as virtual disks or volumes. These disks are presented by the SVC
I/O Groups through the SAN (FC/FCoE) or LAN (iSCSI) to the servers. The MDisks are placed
into storage pools where they are divided into several extents.
For more information about the total storage capacity that is manageable per system
regarding the selection of extents, go to:
https://www.ibm.com/support/docview.wss?uid=ssg1S1010644
A volume is host-accessible storage that was provisioned out of one storage pool, or, if it is a
mirrored volume, out of two storage pools.
The maximum size of an MDisk is 1 PiB. An IBM SAN Volume Controller system supports up
to 4096 MDisks (including internal RAID arrays). When an MDisk is presented to the IBM
SAN Volume Controller, it is in one of the following modes:
Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An
unmanaged MDisk is not associated with any volumes and has no metadata that is stored
on it. The SVC does not write to an MDisk that is in unmanaged mode, except when it
attempts to change the mode of the MDisk to one of the other modes. The SVC can see
the resource, but the resource is not assigned to a storage pool.
Managed MDisk
Managed mode MDisks are always members of a storage pool, and they contribute
extents to the storage pool. Volumes (if not operated in image mode) are created from
these extents. MDisks that are operating in managed mode might have metadata extents
that are allocated from them and can be used as quorum disks. This mode is the most
common and normal mode for an MDisk.
Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume by
using virtualization. This mode is provided to satisfy the following major usage scenarios:
– Image mode enables the virtualization of MDisks that already contain data that was
written directly and not through an SVC. Rather, it was created by a direct-connected
host.
This mode enables a client to insert the SVC into the data path of an existing storage
volume or LUN with minimal downtime. For more information about the data migration
process, see Chapter 9, “Storage migration” on page 391.
– Image mode enables a volume that is managed by the SVC to be used with the native
copy services function that is provided by the underlying RAID controller. To avoid the
loss of data integrity when the SVC is used in this way, it is important that you disable
the SVC cache for the volume.
– The SVC provides the ability to migrate to image mode, which enables the SVC to
export volumes and access them directly from a host without the SVC in the path.
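A quick, hedged example of checking these modes from the CLI (the filter value reflects the
standard lsmdisk output fields):
# List all MDisks with their mode (unmanaged, managed, image, or array)
lsmdisk
# Show only MDisks that are not yet assigned to a storage pool
lsmdisk -filtervalue mode=unmanaged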
Each MDisk that is presented from an external disk controller has an online path count
that is the number of nodes that has access to that MDisk. The maximum count is the
maximum number of paths that is detected at any point by the system. The current count
is what the system sees at this point. A current value that is less than the maximum can
indicate that SAN fabric paths were lost.
SSDs that are in the SVC 2145-CG8 or flash space, which are presented by the external
Flash Enclosures of the SVC 2145-DH8 or SV1 nodes, are presented to the cluster as
MDisks. To determine whether the selected MDisk is an SSD/Flash, click the link on the
MDisk name to display the Viewing MDisk Details window.
If the selected MDisk is an SSD/Flash that is on an SVC, the Viewing MDisk Details
window displays values for the Node ID, Node Name, and Node Location attributes.
Alternatively, you can select Work with Managed Disks → Disk Controller Systems
from the portfolio. On the Viewing Disk Controller window, you can match the MDisk to the
disk controller system that has the corresponding values for those attributes.
2.2.7 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a
magnetic disk drive experience seek time and latency time at the drive level, which can result
in 1 ms - 10 ms of response time (for an enterprise-class disk).
The SVC provides a flexible cache model, and the node’s memory can be used as read or
write cache.
Cache is allocated in 4 KiB segments. A segment holds part of one track. A track is the unit of
locking and destaging granularity in the cache. The cache virtual track size is 32 KiB (eight
segments). A track might be only partially populated with valid pages. The SVC combines
writes up to a 256 KiB track size if the writes are in the same tracks before destage. For
example, if 4 KiB is written into a track and then another 4 KiB is written to another location in
the same track, the two writes can be destaged together.
Therefore, the blocks that are written from the SVC to the disk subsystem can be any size
between 512 bytes up to 256 KiB. The large cache and advanced cache management
algorithms allow it to improve on the performance of many types of underlying disk
technologies. The SVC’s capability to manage, in the background, the destaging operations
that are incurred by writes (in addition to still supporting full data integrity) assists with SVC’s
capability in achieving good database performance.
Figure 2-7 shows the separation of the upper and lower cache.
The upper cache delivers the following functions, which enable the SVC to streamline data
write performance:
Provides fast write response times to the host by being as high up in the I/O stack as
possible
Provides partitioning
Combined, the two levels of cache also deliver the following functions:
Pins data when the LUN goes offline
Provides enhanced statistics for IBM Tivoli® Storage Productivity Center, and maintains
compatibility with an earlier version
Provides trace for debugging
Reports medium errors
Resynchronizes cache correctly and provides the atomic write functionality
Ensures that other partitions continue operation when one partition becomes 100% full of
pinned data
Supports fast-write (two-way and one-way), flush-through, and write-through
Integrates with T3 recovery procedures
Supports two-way operation
Supports none, read-only, and read/write as user-exposed caching policies
Supports flush-when-idle
Supports expanding cache as more memory becomes available to the platform
Supports credit throttling to avoid I/O skew and offer fairness/balanced I/O between the
two nodes of the I/O Group
Enables switching of the preferred node without needing to move volumes between I/O
Groups
Depending on the size, age, and technology level of the disk storage system, the total
available cache in the IBM SAN Volume Controller nodes can be larger, smaller, or about the
same as the cache that is associated with the disk storage.
Because hits to the cache can occur in either the SVC or the disk controller level of the overall
system, the system as a whole can take advantage of the larger amount of cache wherever
the cache is located. Therefore, if the storage controller level of the cache has the greater
capacity, expect hits to this cache to occur, in addition to hits in the SVC cache.
In addition, regardless of their relative capacities, both levels of cache tend to play an
important role in enabling sequentially organized data to flow smoothly through the system.
The SVC cannot increase the throughput potential of the underlying disks in all cases
because this increase depends on both the underlying storage technology and the degree to
which the workload exhibits hotspots or sensitivity to cache size or cache algorithms.
SVC V7.3 introduced a major upgrade to the cache code and in association with 2145-DH8
hardware it provided an additional cache capacity upgrade. A base SVC node configuration
included 32 GB of cache. Adding the second processor and cache upgrade for Real-time
Compression (RtC) took a single node to a total of 64 GB of cache. A single I/O Group with
support for RtC contained 128 GB of cache, whereas an eight node SVC system with a
maximum cache configuration contained a total of 512 GB of cache.
These limits have been enhanced with the 2145-SV1 appliance and SVC V8.1. Before this
release, the SVC memory manager (PLMM) could address only 64 GB of memory. In V8.1,
the underlying PLMM has been rewritten and the structure size increased. The cache size
can now be upgraded to 256 GB, and the whole memory can be used. However, the
write cache is still limited to a maximum of 12 GB and the compression cache to a maximum of
34 GB. The remaining installed cache is used as read cache (including allocation for
features like FlashCopy, Global Mirror, Metro Mirror, and so on).
Important: When upgrading to V8.1 on a system where more than 64 GB of physical
memory is already installed (but not yet used), the error message “1199 Detected hardware
needs activation” appears in the GUI event log after the upgrade (and error code 0x841
is reported by the lseventlog command in the CLI).
The nodes are split into groups where the remaining nodes in each group can communicate
with each other, but not with the other group of nodes that were formerly part of the system. In
this situation, some nodes must stop operating and processing I/O requests from hosts to
preserve data integrity while maintaining data access. If a group contains less than half the
nodes that were active in the system, the nodes in that group stop operating and processing
I/O requests from hosts.
It is possible for a system to split into two groups, with each group containing half the original
number of nodes in the system. A quorum disk determines which group of nodes stops
operating and processing I/O requests. In this tie-break situation, the first group of nodes that
accesses the quorum disk is marked as the owner of the quorum disk. As a result, the owner
continues to operate as the system, handling all I/O requests.
If the other group of nodes cannot access the quorum disk, or finds the quorum disk is owned
by another group of nodes, it stops operating as the system and does not handle I/O
requests. A system can have only one active quorum disk used for a tie-break situation.
However, the system uses three quorum disks to record a backup of system configuration
data to be used if there is a disaster. The system automatically selects one active quorum
disk from these three disks.
The other quorum disk candidates provide redundancy if the active quorum disk fails before a
system is partitioned. To avoid the possibility of losing all of the quorum disk candidates with a
single failure, assign quorum disk candidates on multiple storage systems.
Quorum disk placement: If possible, the SVC places the quorum candidates on separate
disk subsystems. However, after the quorum disk is selected, no attempt is made to ensure
that the other quorum candidates are presented through separate disk subsystems.
You can list the quorum disk candidates and the active quorum disk in a system by using the
lsquorum command.
When the set of quorum disk candidates is chosen, it is fixed. However, a new quorum disk
candidate can be chosen in one of the following conditions:
When the administrator requests that a specific MDisk becomes a quorum disk by using
the chquorum command
When an MDisk that is a quorum disk is deleted from a storage pool
When an MDisk that is a quorum disk changes to image mode
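The following sketch (the MDisk ID and quorum index are hypothetical) shows how the quorum
assignment can be reviewed and adjusted; verify the exact chquorum syntax for your code level:
# Display the three quorum disk candidates and the active quorum disk
lsquorum
# Request that MDisk 5 is used as quorum candidate index 2
chquorum -mdisk 5 2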
For disaster recovery purposes, a system must be regarded as a single entity, so the
system and the quorum disk must be colocated.
Special considerations are required for the placement of the active quorum disk for a
stretched or split cluster and split I/O Group configurations. For more information, see IBM
Knowledge Center.
Important: Running an SVC system without a quorum disk can seriously affect your
operation. A lack of available quorum disks for storing metadata prevents any migration
operation (including a forced MDisk delete).
Mirrored volumes can be taken offline if no quorum disk is available. This behavior occurs
because the synchronization status for mirrored volumes is recorded on the quorum disk.
During the normal operation of the system, the nodes communicate with each other. If a node
is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the system. If a
node fails for any reason, the workload that is intended for the node is taken over by another
node until the failed node is restarted and readmitted into the system (which happens
automatically).
If the Licensed Internal Code on a node becomes corrupted, which results in a failure, the
workload is transferred to another node. The code on the failed node is repaired, and the
node is readmitted into the system (which is an automatic process).
IP quorum configuration
In a stretched configuration or HyperSwap configuration, you must use a third, independent
site to house quorum devices. To use a quorum disk as the quorum device, this third site must
use Fibre Channel or IP connectivity together with an external storage system. In a local
environment, no extra hardware or networking, such as Fibre Channel or SAS-attached
storage, is required beyond what is normally always provisioned within a system.
To use an IP-based quorum application as the quorum device for the third site, no Fibre
Channel connectivity is used. Java applications are run on hosts at the third site. However,
there are strict requirements on the IP network, and some disadvantages with using IP
quorum applications.
Unlike quorum disks, all IP quorum applications must be reconfigured and redeployed to
hosts when certain aspects of the system configuration change. These aspects include
adding or removing a node from the system, or when node service IP addresses are
changed.
Even with IP quorum applications at the third site, quorum disks at site one and site two are
required because they are used to store metadata. To provide quorum resolution, use the
mkquorumapp command to generate a Java application that is copied from the system and run
on a host at a third site. The maximum number of applications that can be deployed is five.
Currently, supported Java runtime environments (JREs) are IBM Java 7.1 and IBM Java 8.
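As a brief illustration of this workflow (the host at the third site and the file location are
assumptions), the IP quorum application is generated on the system and then run remotely:
# On the SVC system, generate the IP quorum Java application
mkquorumapp
# Copy the generated ip_quorum.jar file from the system to a host at the
# third site, then start it there with a supported JRE
java -jar ip_quorum.jar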
At any point, an MDisk can be a member in one storage pool only, except for image mode
volumes.
Figure 2-8 shows the relationships of the SVC entities to each other.
Each MDisk in the storage pool is divided into several extents. The size of the extent is
selected by the administrator when the storage pool is created, and cannot be changed later.
The size of the extent is 16 MiB - 8192 MiB.
It is a preferred practice to use the same extent size for all storage pools in a system. This
approach is a prerequisite for supporting volume migration between two storage pools. If the
storage pool extent sizes are not the same, you must use volume mirroring to copy volumes
between pools.
The SVC limits the number of extents in a system to 2^22 (approximately 4 million). Because
the number of addressable extents is limited, the total capacity of an SVC system depends
on the extent size that is chosen by the SVC administrator.
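As a rough illustration of this dependency, the maximum managed capacity is simply the number
of addressable extents multiplied by the extent size:
maximum capacity = number of extents x extent size
with 16 MiB extents:   4,194,304 x 16 MiB   = 64 TiB
with 8192 MiB extents: 4,194,304 x 8192 MiB = 32 PiB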
2.2.11 Volumes
Volumes are logical disks that are presented to the host or application servers by the SVC.
The hosts cannot see the MDisks. They can see only the logical volumes that are created
from combining extents from a storage pool.
Sequential
A sequential volume is where the extents are allocated one after the other, from one
MDisk to the next MDisk (Figure 2-10).
Image mode
Image mode volumes (Figure 2-11) are special volumes that have a direct relationship
with one MDisk. The most common use case of image volumes is a data migration from
your old (typically non-virtualized) storage to the SVC-based virtualized infrastructure.
When the image mode volume is created, a direct mapping is made between extents that
are on the MDisk and the extents that are on the volume. The logical block address (LBA)
x on the MDisk is the same as the LBA x on the volume, which ensures that the data on
the MDisk is preserved as it is brought into the clustered system.
Some virtualization functions are not available for image mode volumes, so it is often useful to
migrate the volume into a new storage pool. After the migration completion, the MDisk
becomes a managed MDisk.
If you add a new MDisk that contains historical data to a storage pool, all data on the MDisk is
lost. Therefore, ensure that you create image mode volumes from MDisks that contain data
before you add those MDisks to storage pools.
Easy Tier monitors the host I/O activity and latency on the extents of all volumes with the
Easy Tier function that is turned on in a multitier storage pool over a 24-hour period.
Next, it creates an extent migration plan that is based on this activity, and then dynamically
moves high-activity or hot extents to a higher disk tier within the storage pool. It also moves
extents whose activity dropped off or cooled down from the high-tier MDisks back to a
lower-tiered MDisk.
The automatic load balancing function is enabled by default on each volume, and cannot be
turned off using the GUI. This load balancing feature is not considered to be an Easy Tier
function, although it uses the same principles.
The IBM Easy Tier function can make it more appropriate to use smaller storage pool extent
sizes. The usage statistics file can be offloaded from the SVC nodes. Then, you can use IBM
Storage Tier Advisor Tool (STAT) to create a summary report. STAT is available on the web at
no initial cost at the following link:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000935
A more detailed description of Easy Tier is provided in Chapter 10, “Advanced features for
storage efficiency” on page 407.
2.2.13 Hosts
Volumes can be mapped to a host to allow access for a specific server to a set of volumes. A
host within the SVC is a collection of host bus adapter (HBA) worldwide port names
(WWPNs) or iSCSI-qualified names (IQNs) that are defined on the specific server.
Note: iSCSI names are internally identified by “fake” WWPNs, or WWPNs that are
generated by the SVC. Volumes can be mapped to multiple hosts, for example, a volume
that is accessed by multiple hosts of a server system.
iSCSI is an alternative way of attaching hosts. In addition, starting with SVC V7.7,
back-end storage can be attached by using iSCSI. This configuration is very useful for
migration purposes from non-Fibre-Channel-based environments to the new virtualized
solution.
Node failover can be handled without having a multipath driver that is installed on the iSCSI
server. An iSCSI-attached server can reconnect after a node failover to the original target
IP address, which is now presented by the partner node. To protect the server against link
failures in the network or HBA failures, the use of a multipath driver is mandatory.
Volumes are LUN-masked to the host’s HBA WWPNs by a process called host mapping.
Mapping a volume to the host makes it accessible to the WWPNs or IQNs that are configured
on the host object. For a SCSI over Ethernet connection, the IQN identifies the iSCSI target
(destination) adapter. Host objects can have IQNs and WWPNs.
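A minimal CLI sketch of host mapping follows (the WWPNs, host name, and volume name are
hypothetical):
# Define a host object with two HBA WWPNs
mkhost -name AIX_host01 -fcwwpn 10000000C1234567:10000000C1234568
# Map a volume to that host object (host mapping / LUN masking)
mkvdiskhostmap -host AIX_host01 DB_vol01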
2.2.15 RAID
When planning your network, consideration must be given to the type of RAID configuration.
The IBM SAN Volume Controller supports either the traditional array configuration or the
distributed array.
An array can contain 2 - 16 drives; several arrays create the capacity for a pool. For
redundancy, spare drives (“hot spares”) are allocated to assume read/write operations if any
of the other drives fail. The rest of the time, the spare drives are idle and do not process
requests for the system.
When an array member drive fails, the system automatically replaces the failed member with
a hot spare drive and rebuilds the array to restore its redundancy. Candidate and spare drives
can be manually exchanged with array members.
Distributed array configurations can contain 4 - 128 drives. Distributed arrays remove the
need for separate drives that are idle until a failure occurs. Rather than allocating one or more
drives as spares, the spare capacity is distributed over specific rebuild areas across all the
member drives. Data can be copied faster to the rebuild area and redundancy is restored
much more rapidly. Additionally, as the rebuild progresses, the performance of the pool is
more uniform because all of the available drives are used for every volume extent.
After the failed drive is replaced, data is copied back to the drive from the distributed spare
capacity. Unlike hot spare drives, read/write requests are processed on other parts of the
drive that are not being used as rebuild areas. The number of rebuild areas is based on the
width of the array.
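For illustration only (the drive class, drive count, and pool name are hypothetical, and the
available parameters depend on the code level), a distributed array is typically created with a
single CLI command:
# Create a distributed RAID 6 array from 40 drives of drive class 0
# and add it to storage pool Pool0
mkdistributedarray -level raid6 -driveclass 0 -drivecount 40 Pool0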
2.2.16 Encryption
The IBM SAN Volume Controller provides optional encryption of data at rest, which protects
against the potential exposure of sensitive user data and user metadata that is stored on
discarded, lost, or stolen storage devices. Encryption of system data and system metadata is
not required, so system data and metadata are not encrypted.
Planning for encryption involves purchasing a licensed function and then activating and
enabling the function on the system.
To encrypt data that is stored on drives, the nodes capable of encryption must be licensed
and configured to use encryption. When encryption is activated and enabled on the system,
valid encryption keys must be present on the system when the system unlocks the drives or
the user generates a new key.
In IBM Spectrum Virtualize V7.4, hardware encryption was introduced, and a software
encryption option followed in V7.6. Encryption keys can be either managed by IBM
Security Key Lifecycle Manager (SKLM) or stored on USB flash drives attached to a minimum
of one of the nodes. V8.1 also allows a combination of SKLM and USB key repositories.
IBM Security Key Lifecycle Manager is an IBM solution that provides the infrastructure and
processes to locally create, distribute, back up, and manage the lifecycle of encryption keys
and certificates. Before activating and enabling encryption, you must determine the method of
accessing key information during times when the system requires an encryption key to be
present.
Data encryption is protected by the Advanced Encryption Standard (AES) algorithm that uses
a 256-bit symmetric encryption key in XTS mode, as defined in the Institute of Electrical and
Electronics Engineers (IEEE) 1619-2007 standard as XTS-AES-256. That data encryption
key is itself protected by a 256-bit AES key wrap when stored in non-volatile form.
Because data security and encryption play a significant role in today’s storage environments,
this book provides more details in Chapter 12, “Encryption” on page 633.
2.2.17 iSCSI
iSCSI is an alternative means of attaching hosts and external storage controllers to the IBM
SAN Volume Controller.
The iSCSI function is a software function that is provided by the IBM Spectrum Virtualize
code, not hardware. In V7.7, IBM introduced software capabilities to allow the underlying
virtualized storage to attach to IBM SAN Volume Controller using iSCSI protocol.
iSCSI protocol allows the transport of SCSI commands and data over an Internet Protocol
network, which is based on IP routers and Ethernet switches. iSCSI is a block-level protocol
that encapsulates SCSI commands. Therefore, it uses an existing IP network rather than
Fibre Channel infrastructure.
The major functions of iSCSI include the encapsulation and reliable delivery of command
descriptor block (CDB) transactions between initiators and targets through the Internet
Protocol network, especially over a potentially unreliable IP network.
Every iSCSI node in the network must have an iSCSI name and address:
An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An
iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms
initiator name and target name also refer to an iSCSI name.
An iSCSI address specifies not only the iSCSI name of an iSCSI node, but a location of
that node. The address consists of a host name or IP address, a TCP port number (for the
target), and the iSCSI name of the node. An iSCSI node can have any number of
addresses, which can change at any time, particularly if they are assigned by way of
Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node
and provides statically allocated IP addresses.
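As a hedged example, an SVC node typically presents an iSCSI target name of the form
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>, and the iSCSI IP addresses are assigned per
node Ethernet port (the node name, addresses, and port ID below are hypothetical):
# Assign an iSCSI IP address to Ethernet port 1 of node1
cfgportip -node node1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 1
# List the configured Ethernet port IP addresses
lsportip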
IBM Real-time Compression provides the following benefits:
Compression for active primary data. IBM Real-time Compression can be used with active
primary data.
Compression for replicated/mirrored data. Remote volume copies can be compressed in
addition to the volumes at the primary storage tier. This process reduces storage
requirements in Metro Mirror and Global Mirror destination volumes as well.
No changes to the existing environment are required. IBM Real-time Compression is part
of the storage system.
Overall savings in operational expenses. More data is stored in a rack space, so fewer
storage expansion enclosures are required to store a data set. This reduced rack space
has the following benefits:
– Reduced power and cooling requirements. More data is stored in a system, requiring
less power and cooling per gigabyte or used capacity.
– Reduced software licensing for additional functions in the system. More data stored per
enclosure reduces the overall spending on licensing.
Disk space savings are immediate. The space reduction occurs when the host writes the
data. This process is unlike other compression solutions, in which some or all of the
reduction is realized only after a post-process compression batch job is run.
2.2.19 IP replication
IP replication was introduced in V7.2 and allows data replication between IBM Spectrum
Virtualize family members. IP replication uses IP-based ports of the cluster nodes.
IP replication function is transparent to servers and applications in the same way that
traditional FC-based mirroring is. All remote mirroring modes (Metro Mirror, Global Mirror, and
Global Mirror with changed volumes) are supported.
The configuration of the system is straightforward and IBM Storwize family systems normally
find each other in the network and can be selected from the GUI.
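As a sketch only (the remote cluster IP address and bandwidth value are hypothetical), an IP
partnership between two systems can also be created from the CLI:
# Create an IPv4 partnership to the remote system and limit the link bandwidth
mkippartnership -type ipv4 -clusterip 192.168.20.10 -linkbandwidthmbits 100
# Verify the partnership state (repeat the configuration on the remote system)
lspartnership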
IP connections that are used for replication can have long latency (the time to transmit a
signal from one end to the other), which can be caused by distance or by many “hops”
between switches and other appliances in the network. Traditional replication solutions
transmit data, wait for a response, and then transmit more data, which can result in network
utilization as low as 20% (based on IBM measurements). In addition, this scenario gets worse
as the latency increases.
Bridgeworks SANSlide technology, which is integrated with the IBM Storwize family, requires
no separate appliances and so requires no additional cost and no configuration steps. It uses
artificial intelligence (AI) technology to transmit multiple data streams in parallel, adjusting
automatically to changing network environments and workloads.
SANSlide improves network bandwidth utilization up to 3x. Therefore, customers can deploy
a less costly network infrastructure, or take advantage of faster data transfer to speed
replication cycles, improve remote data currency, and enjoy faster recovery.
Starting with V6.3 (now IBM Spectrum Virtualize), copy services functions are implemented
within a single IBM SAN Volume Controller or between multiple members of the IBM
Spectrum Virtualize family.
The copy services layer sits above and operates independently of the function or
characteristics of the underlying disk subsystems used to provide storage resources to an
IBM SAN Volume Controller.
Synchronous remote copy ensures that updates are committed at both the primary and the
secondary volumes before the application considers the updates complete. Therefore, the
secondary volume is fully up to date if it is needed in a failover. However, the application is
fully exposed to the latency and bandwidth limitations of the communication link to the
secondary volume. In a truly remote situation, this extra latency can have a significant
adverse effect on application performance.
Special configuration guidelines exist for SAN fabrics and IP networks that are used for data
replication. The distance and the available bandwidth of the intersite links must be
considered.
A function of Global Mirror designed for low bandwidth has been introduced in IBM Spectrum
Virtualize. It uses change volumes that are associated with the primary and secondary
volumes. These change volumes are used to record changes to the remote copy volumes. A
FlashCopy relationship exists between the secondary volume and its change volume, and
between the primary volume and its change volume. This function is called Global Mirror
with change volumes, and it operates in cycling mode.
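The following hedged sketch outlines one possible CLI flow (the volume, relationship, and
remote system names are hypothetical, and the cycling-mode parameters should be verified for
your code level):
# Create a Global Mirror relationship to the remote system
mkrcrelationship -master DB_vol01 -aux DB_vol01_DR -cluster remote_svc -global -name rc_db01
# Switch the relationship to cycling mode and attach the master change volume
# (the auxiliary change volume is attached in the same way on the remote system)
chrcrelationship -cyclingmode multi rc_db01
chrcrelationship -masterchange DB_vol01_chg rc_db01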
Figure 2-12 shows an example of this function where you can see the relationship between
volumes and change volumes.
In asynchronous remote copy, the application acknowledges that the write is complete before
the write is committed at the secondary volume. Therefore, on a failover, certain updates
(data) might be missing at the secondary volume. The application must have an external
mechanism for recovering the missing updates, if possible. This mechanism can involve user
intervention. Recovery on the secondary site involves starting the application on this recent
backup, and then rolling forward or backward to the most recent commit point.
FlashCopy
FlashCopy is sometimes described as an instance of a time-zero (T0) copy or a point-in-time
(PiT) copy technology.
FlashCopy can be performed on multiple source and target volumes. FlashCopy allows the
management operations to be coordinated so that a common single point in time is chosen
for copying target volumes from their respective source volumes.
With IBM Spectrum Virtualize, multiple target volumes can undergo FlashCopy from the same
source volume. This capability can be used to create images from separate points in time for
the source volume, and to create multiple images from a source volume at a common point in
time. Source and target volumes can be thin-provisioned volumes.
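A minimal sketch of creating and starting a FlashCopy mapping (the volume and mapping names
are hypothetical):
# Create a FlashCopy mapping with a background copy rate of 50
mkfcmap -source DB_vol01 -target DB_vol01_copy -copyrate 50 -name fcmap_db01
# Prepare (flush cache) and start the mapping, taking the point-in-time copy
startfcmap -prep fcmap_db01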
Reverse FlashCopy enables target volumes to become restore points for the source volume
without breaking the FlashCopy relationship, and without waiting for the original copy
operation to complete. IBM Spectrum Virtualize supports multiple targets, and therefore
multiple rollback points.
The Transparent Cloud Tiering function helps organizations to reduce costs related to power
and cooling when off-site data protection is required to send sensitive data out of the main
site.
Transparent Cloud Tiering uses IBM FlashCopy techniques that provide full and incremental
snapshots of one or more volumes. Snapshots are encrypted and compressed before being
uploaded to the cloud. Reverse operations are also supported within that function. When a
set of data is transferred out to cloud, the volume snapshot is stored as object storage.
IBM Cloud Object Storage uses an innovative, cost-effective approach to store large
amounts of unstructured data, and delivers mechanisms that provide security services, high
availability, and reliability.
The management GUI provides an easy-to-use initial setup, advanced security settings, and
audit logs that record all backup and restore operations to the cloud.
Resources on the clustered system act as highly available versions of unclustered resources.
If a node (an individual computer) in the system is unavailable or too busy to respond to a
request for a resource, the request is passed transparently to another node that can process
the request. The clients are unaware of the exact locations of the resources that they use.
The SVC is a collection of up to eight nodes, which are added in pairs that are known as I/O
Groups. These nodes are managed as a set (system), and they present a single point of
control to the administrator for configuration and service activity.
The eight-node limit for an SVC system is a limitation that is imposed by the Licensed Internal
Code, and not a limit of the underlying architecture. Larger system configurations might be
available in the future.
Although the SVC code is based on a purpose-optimized Linux kernel, the clustered system
feature is not based on Linux clustering code. The clustered system software within the SVC,
that is, the event manager cluster framework, is based on the outcome of the COMPASS
research project. It is the key element that isolates the SVC application from the underlying
hardware nodes.
The clustered system software makes the code portable. It provides the means to keep the
single instances of the SVC code that are running on separate systems’ nodes in sync.
Therefore, restarting nodes during a code upgrade, adding new nodes, removing old nodes
from a system, or failing nodes cannot affect the SVC’s availability.
All active nodes of a system must know that they are members of the system. This knowledge
is especially important in situations where it is key to have a solid mechanism to decide which
nodes form the active system, such as the split-brain scenario where single nodes lose
contact with other nodes. A worst case scenario is a system that splits into two separate
systems.
Within an SVC system, the voting set and a quorum disk are responsible for the integrity of
the system. If nodes are added to a system, they are added to the voting set. If nodes are
removed, they are removed quickly from the voting set. Over time, the voting set and the
nodes in the system can completely change so that the system migrates onto a separate set
of nodes from the set on which it started.
The SVC clustered system implements a dynamic quorum. Following a loss of nodes, if the
system can continue to operate, it adjusts the quorum requirement so that further node failure
can be tolerated.
The node with the lowest Node Unique ID in a system becomes the boss node for the group of
nodes. It determines (from the quorum rules) whether the nodes can operate as the
system. This node also presents a maximum of two cluster IP addresses on one or both of its
Ethernet ports to allow access for system management.
Stretched Clusters are considered high availability (HA) solutions because both sites work as
instances of the production environment (there is no standby location). Combined with
application and infrastructure layers of redundancy, Stretched Clusters can provide enough
protection for data that requires availability and resiliency.
When the IBM SAN Volume Controller was first introduced, the maximum supported distance
between nodes within an I/O Group was 100 meters. With the evolution of code and
introduction of new features, IBM SAN Volume Controller V5.1 introduced support for the
Stretched Cluster configuration. In this configuration, nodes within an I/O Group can be
separated by a distance of up to 10 kilometers (km) using specific configurations.
Within IBM Spectrum Virtualize V7.5, the site awareness concept has been extended to
hosts. This change enables more efficiency for host I/O traffic through the SAN, and an easier
host path management.
IBM Spectrum Virtualize V7.6 introduces a new feature for stretched systems, the IP Quorum
application. Using an IP-based quorum application as the quorum device for the third site, no
Fibre Channel connectivity is required. Java applications run on hosts at the third site.
However, there are strict requirements on the IP network with using IP quorum applications.
Unlike quorum disks, all IP quorum applications must be reconfigured and redeployed to
hosts when certain aspects of the system configuration change.
IP Quorum details can be found in IBM Knowledge Center for SAN Volume Controller:
https://ibm.biz/BdsmN2
Note: Stretched cluster and Enhanced Stretched Cluster features are supported only for
IBM SAN Volume Controller. They are not supported in IBM Storwize family of products.
The HyperSwap feature provides highly available volumes accessible through two sites at up
to 300 km apart. A fully independent copy of the data is maintained at each site. When data is
written by hosts at either site, both copies are synchronously updated before the write
operation is completed. The HyperSwap feature will automatically optimize itself to minimize
data transmitted between sites and to minimize host read and write latency.
For further technical details and implementation guidelines on deploying Stretched Cluster or
Enhanced Stretched Cluster, see IBM Spectrum Virtualize and SAN Volume Controller
Enhanced Stretched Cluster with VMware, SG24-8211.
Up to four hot spare nodes can be added to a single cluster, and they must match the hardware
type and configuration of your active cluster nodes. That is, in a mixed-node cluster, you
should have one spare of each node type. Given that V8.1 is supported only on SVC 2145-DH8
and 2145-SV1 nodes, this mixture is not a problem, but it is something to be aware of. Most
clients upgrade the whole cluster to a single node type anyway, following best practices.
However, in addition to the node type, the hardware configurations must match. Specifically,
the amount of memory and the number and placement of Fibre Channel and compression cards
must be identical.
The Hot Spare node essentially becomes another node in the cluster, but it does nothing under
normal conditions. Only when it is needed does it use the NPIV feature of the host virtual
ports to take over the personality of the failed node. There is a delay of approximately one
minute before the cluster swaps in a spare node. This delay is set intentionally to avoid any
thrashing when a node fails: the system must be sure that the node has definitely failed, and
is not just, for example, rebooting.
Because NPIV is enabled, the host should not notice anything during this time. The first
thing that happens is that the failed node's virtual host ports fail over to the partner node.
Then, when the spare swaps in, they fail over to that node. The cache is flushed while only
one node is in the I/O Group, but when the spare swaps in, you get the full cache back.
Note: A warm start of an active node (code assert or restart) does not cause the hot spare
to swap in, because the rebooted node becomes available within one minute.
The other use case for Hot Spare nodes is during a software upgrade. Normally, the only
impact during an upgrade is slightly degraded performance. While the node that is upgrading
is down, its partner in the I/O Group writes through cache and handles the workload of both
nodes. To work around this limitation, the cluster takes a spare in place of the node that
is upgrading. Therefore, the cache does not need to go into write-through mode.
After the upgraded node returns, it is swapped back so you end up rolling through the nodes
as normal but without any failover and failback seen at the multipathing layer. All of this
process is handled by the NPIV ports and so should make upgrades seamless for
administrators working in large enterprise SVC deployments.
Note: After the cluster commits new code, it will also automatically upgrade Hot Spares to
match the cluster code level.
This feature is available only on the SVC. Although Storwize systems can use NPIV and get
the general failover benefits, spare canisters and split I/O groups are not available on the
Storwize V7000.
You can maintain a chat session with the IBM service representative so that you can monitor
this activity and either understand how to fix the problem yourself or allow the representative
to fix it for you.
To use the IBM Assist On-site tool, the master console must be able to access the Internet.
The following website provides further information about this tool:
http://www.ibm.com/support/assistonsite/
When you access the website, you sign in and enter a code that the IBM service
representative provides to you. This code is unique to each IBM Assist On-site session. A
plug-in is downloaded on to your master console to connect you and your IBM service
representative to the remote service session. The IBM Assist On-site tool contains several
layers of security to protect your applications and your computers. The plug-in is removed
after the next reboot.
You can also use security features to restrict access by the IBM service representative. Your
IBM service representative can provide you with more detailed instructions for using the tool.
The SVC V8.1 code includes an embedded software toolset called Remote Support Client. It establishes a network connection over a secured channel with the Remote Support Server in the IBM network. The Remote Support Server provides predictive analysis of SVC status and assists administrators with troubleshooting and fix activities. Remote Support Assistance is available at no extra charge, and no additional license is needed.
Each event that IBM SAN Volume Controller detects is assigned a notification type of Error,
Warning, or Information. You can configure the IBM SAN Volume Controller to send each
type of notification to specific recipients.
You can use the Management Information Base (MIB) file for SNMP to configure a network
management program to receive SNMP messages that are sent by the IBM Spectrum
Virtualize.
IBM SAN Volume Controller can send syslog messages that notify personnel about an event.
The event messages can be sent in either expanded or concise format. You can use a syslog
manager to view the syslog messages that IBM SAN Volume Controller sends.
IBM Spectrum Virtualize uses the User Datagram Protocol (UDP) to transmit the syslog
message. You can use the management GUI or the CLI to configure and modify your syslog
settings.
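As an illustration only (the server addresses and community string below are placeholders, and exact parameters depend on your environment and code level), SNMP and syslog notification targets can be defined from the CLI with commands similar to the following:

# Add an SNMP server that receives error and warning traps
mksnmpserver -ip 192.0.2.50 -community public -error on -warning on -info off
# Add a syslog server that receives all notification types
mksyslogserver -ip 192.0.2.51 -error on -warning on -info on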
To send email, you must configure at least one SMTP server. You can specify as many as five
more SMTP servers for backup purposes. The SMTP server must accept the relaying of email
from the IBM SAN Volume Controller clustered system IP address. You can then use the
management GUI or the CLI to configure the email settings, including contact information and
email recipients. Set the reply address to a valid email address.
Send a test email to check that all connections and infrastructure are set up correctly. You can
disable the Call Home function at any time by using the management GUI or CLI.
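The following sketch shows one possible CLI sequence for setting up email notification; the addresses, contact details, and recipient are placeholders, and parameter details might vary with your code level:

# Define the SMTP relay that accepts mail from the clustered system IP address
mkemailserver -ip 192.0.2.25 -port 25
# Set the contact information and a valid reply address
chemail -reply [email protected] -contact "Storage Admin" -primary 5550100 -location "DC1 rack A2"
# Add a local recipient for error and warning notifications
mkemailuser -address [email protected] -usertype local -error on -warning on -info off
# Activate email notification and send a test message to the configured recipients
startemail
testemail -all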
Chapter 3. Planning
This chapter describes steps that are required to plan the installation of an IBM System
Storage SAN Volume Controller in your storage network.
Important: At the time of writing, the statements provided in this book are correct, but they
might change. Always verify any statements that are made in this book with the IBM SAN
Volume Controller supported hardware list, device driver, firmware, and recommended
software levels that are available at the following websites:
Support Information for SAN Volume Controller:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
IBM System Storage Interoperation Center (SSIC):
https://www.ibm.com/systems/support/storage/ssic/interoperability.wss
To maximize benefit from the SAN Volume Controller, pre-installation planning must include
several important steps. These steps ensure that the SAN Volume Controller provides the
best possible performance, reliability, and ease of management for your application needs.
The correct configuration also helps minimize downtime by avoiding changes to the SAN
Volume Controller and the storage area network (SAN) environment to meet future growth
needs.
Note: Make sure that the planned configuration is reviewed by IBM or an IBM Business
Partner before implementation. Such review can both increase the quality of the final
solution and prevent configuration errors that could impact solution delivery.
This book is not intended to provide in-depth information about described topics. For an
enhanced analysis of advanced topics, see IBM System Storage SAN Volume Controller and
Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
Below is a list of items that you should consider when planning for the SAN Volume
Controller:
Collect and document the number of hosts (application servers) to attach to the SAN
Volume Controller. Identify the traffic profile activity (read or write, sequential, or random),
and the performance requirements (bandwidth and input/output (I/O) operations per
second (IOPS)) for each host.
Collect and document the following information:
– Information on the existing back-end storage that is present in the environment and is
intended to be virtualized by the SAN Volume Controller.
– Whether you need to configure image mode volumes. If you want to use image mode
volumes, decide whether and how you plan to migrate them into managed mode
volumes.
– Information on the planned new back-end storage to be provisioned on the SAN
Volume Controller.
– The required virtual storage capacity for fully provisioned and space-efficient (SE)
volumes.
– The required storage capacity for local mirror copy (volume mirroring).
– The required storage capacity for point-in-time copy (IBM FlashCopy).
– The required storage capacity for remote copy (Metro Mirror and Global Mirror).
– The required storage capacity for compressed volumes.
– The required storage capacity for encrypted volumes.
– Shared storage (volumes presented to more than one host) required in your
environment.
– Per host:
• Volume capacity.
• Logical unit number (LUN) quantity.
• Volume sizes.
Note: When planning the capacities, make explicit notes if the numbers state the net
storage capacity (that is, available to be used by applications running on any host), or
gross capacity, which includes overhead for spare drives (both due to RAID redundancy
and planned hot spare drives) and for file system metadata. For file system metadata,
include overhead incurred by all layers of storage virtualization. In particular, if you plan
storage for virtual machines whose drives are actualized as files on a parallel file
system, then include metadata overhead for the storage virtualization technology used
by your hypervisor software.
Decide whether you need to plan for more than one site. For multi-site deployment, review
the additional configuration requirements imposed.
Define the number of clusters and the number of pairs of nodes (1 - 4) for each cluster.
The number of necessary I/O Groups depends on the overall performance requirements
and the number of hosts you plan to attach.
Decide whether you are going to use N_Port ID Virtualization (NPIV). If you plan to use
NPIV, then review the additional configuration requirements imposed.
Design the SAN according to the requirement for high availability (HA) and best
performance. Consider the total number of ports and the bandwidth that is needed at each
link, especially Inter-Switch Links (ISLs). Consider ISL trunking for improved performance.
Separately collect requirements for Fibre Channel and IP-based storage network.
Note: Check and carefully count the required ports. Separately note the ports
dedicated for extended links. Especially in an enhanced stretched cluster (ESC) or
HyperSwap environment, you might need additional long wave gigabit interface
converters (GBICs).
Define a naming convention for the SAN Volume Controller clusters, nodes, hosts, and
storage objects.
Define the SAN Volume Controller service Internet Protocol (IP) addresses and the
system’s management IP addresses.
Define subnets for the SAN Volume Controller system and for the hosts for Internet Small
Computer System Interface (iSCSI) connectivity.
Define the IP addresses for IP replication (if required).
Define back-end storage that will be used by the system.
Define the managed disks (MDisks) in the back-end storage to be used by SAN Volume
Controller.
Define the storage pools, specify MDisks for each pool and document mapping of MDisks
to back-end storage. Parameters of the back-end storage determine the characteristics of
the volumes in the pool. Make sure that each pool contains MDisks of similar (ideally,
identical) performance characteristics.
Plan allocation of hosts and volumes to I/O Groups to optimize the I/O load distribution
between the hosts and the SAN Volume Controller. Allowing a host to access more than
one I/O group might better distribute the load between system nodes. However, doing so
will reduce the maximum number of hosts attached to the SAN Volume Controller.
Plan queue depths for the attached hosts. For more information, see this website:
https://ibm.biz/BdjKcK
Plan for the physical location of the equipment in the rack.
Verify that your planned environment is a supported configuration.
Verify that your planned environment does not exceed system configuration limits.
Planning activities required for SAN Volume Controller deployment are described in the
following sections.
Note: If you are installing a hot-spare node, the Fibre Channel cabling must be identical
for all nodes of the system. In other words, port 1 on every node must be connected to
the same fabric, port 2 on every node must be connected to the same fabric, and so on.
Quorum disk placement
The SAN Volume Controller uses three MDisks as quorum disks for the clustered system.
A preferred practice is to have each quorum disk in a separate storage subsystem, where
possible. The current locations of the quorum disks can be displayed by using the
lsquorum command, and relocated by using the chquorum command.
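For example (the MDisk name below is hypothetical), the quorum assignment can be reviewed and one quorum disk moved to an MDisk on a different back-end system:

# List the current quorum disks and the controllers that provide them
lsquorum
# Move quorum index 2 to a different MDisk
chquorum -mdisk mdisk12 2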
Failure domain sizes
Failure of an MDisk takes the whole storage pool offline that contains this MDisk. To
reduce impact of an MDisk failure, consider reducing the number of back-end storage
systems per storage pool, and increasing the number of storage pools and reducing their
size. Note that this configuration in turn limits the maximum performance of the pool (fewer
back-end systems to share the load), increases storage management effort, can lead to
less efficient storage capacity consumption, and might be subject to limitation by system
configuration maximums.
Consistency
Strive to achieve consistent availability levels of all system building blocks. For example, if
the solution relies on a single switch placed in the same rack as one of the SAN Volume
Controller nodes, investment in a dual-rack configuration for placement of the second
node is not justified. Any incident affecting the rack that holds the critical switch brings
down the whole system, no matter where the second SAN Volume Controller node is
placed.
SAN Volume Controller supports SAN routing technologies between SAN Volume Controller
and storage systems, as long as the routing stays entirely within Fibre Channel connectivity
and does not use other transport technologies such as IP. However, SAN routing technologies
(including FCIP links) are supported for connections between the SAN Volume Controller and
hosts. The use of long-distance FCIP connections might degrade the storage performance for
any servers that are attached through this technology.
Table 3-1 shows the fabric type that can be used for communicating between hosts, nodes,
and back-end storage systems. All fabric types can be used at the same time.
When you plan deployment of SAN Volume Controller, identify networking technologies that
you will use.
3.4 Physical planning
You must consider several key factors when you are planning the physical site of a SAN
Volume Controller installation. The physical site must have the following characteristics:
Meets power, cooling, and location requirements of the SAN Volume Controller nodes.
Has two separate power sources.
Has sufficient rack space for installation of the controller nodes.
Has a rack with a sufficient maximum power rating. Plan your rack placement carefully so that you do not exceed the maximum power rating of the rack. For more information about the power requirements, see the following website:
https://ibm.biz/Bdjvhm
For more information about SAN Volume Controller nodes rack installation planning, including
environmental requirements and sample rack layouts, see:
https://ibm.biz/Bdjm5Y for 2145-DH8
https://ibm.biz/Bdjm5z for 2145-SV1
The functionality of UPS units is provided by internal batteries, which are delivered with each
node’s hardware. The batteries ensure that during external power loss or disruption, the node
is kept operational long enough to copy data from its physical memory to its internal disk drive
and shut down gracefully. This process enables the system to recover without data loss when
external power is restored.
For more information about the 2145-DH8 Model, see IBM SAN Volume Controller 2145-DH8
Introduction and Implementation, SG24-8229.
For more information about installing the 2145-SV1, see IBM Knowledge Center:
https://ibm.biz/Bdr7wp
3.4.2 Cabling
Create a cable connection table that follows your environment’s documentation procedure to
track all of the following connections that are required for the setup:
Power
Ethernet
iSCSI or Fibre Channel over Ethernet (FCoE) connections
Switch ports (FC, Ethernet, and FCoE)
When planning SAN cabling, make sure that your physical topology allows you to observe
zoning rules and recommendations.
If the data center provides more than one power source, make sure that you use that capacity
when planning power cabling for your system.
3.5 Planning IP connectivity
Starting with V6.1, system management is performed through an embedded graphical user
interface (GUI) running on the nodes. To access the management GUI, direct a web browser
to the system management IP address.
The SAN Volume Controller 2145-DH8 node has a feature called a Technician port. Ethernet
port 4 is allocated as the Technician service port, and is marked with a T. All initial
configuration for each node is performed by using the Technician port. The port runs a
Dynamic Host Configuration Protocol (DHCP) service so that any notebook or computer
connected to the port is automatically assigned an IP address.
After the cluster configuration has been completed, the Technician port automatically routes
the connected user directly to the service GUI.
Note: The default IP address for the Technician port on a 2145-DH8 Node is 192.168.0.1.
If the Technician port is connected to a switch, it is disabled and an error is logged.
Each SAN Volume Controller node requires one Ethernet cable to connect it to an Ethernet
switch or hub. The cable must be connected to port 1. A 10/100/1000 megabit (Mb) Ethernet
connection is supported on the port. Both Internet Protocol Version 4 (IPv4) and Internet
Protocol Version 6 (IPv6) are supported.
Note: For increased availability, an optional second Ethernet connection is supported for
each SAN Volume Controller node.
Ethernet port 1 on every node must be connected to the same set of subnets. The same rule
applies to Ethernet port 2 if it is used. However, the subnets available for Ethernet port 1 do
not have to be the same as configured for interfaces on Ethernet port 2.
Each SAN Volume Controller cluster has a Cluster Management IP address, in addition to a
Service IP address for each node in the cluster. See Example 3-1 for details.
Each node in a SAN Volume Controller clustered system needs to have at least one Ethernet
connection. Both IPv4 and IPv6 addresses are supported. SAN Volume Controller can
operate with either Internet Protocol or with both internet protocols concurrently.
For configuration and management, you must allocate an IP address to the system, which is
referred to as the management IP address. For additional fault tolerance, you can also
configure a second IP address for the second Ethernet port on the node. The addresses must
be fixed addresses. If both IPv4 and IPv6 are operating concurrently, an address is required
for each protocol.
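As an illustration (all addresses are placeholders), the management IP addresses for Ethernet ports 1 and 2 can be set with the chsystemip command:

# Set the primary management IP address on Ethernet port 1
chsystemip -clusterip 10.10.10.20 -gw 10.10.10.1 -mask 255.255.255.0 -port 1
# Optionally set a second management IP address, on a different subnet, on Ethernet port 2
chsystemip -clusterip 10.10.20.20 -gw 10.10.20.1 -mask 255.255.255.0 -port 2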
Note: The management IP address cannot be the same as any of the service IPs used.
Figure 3-1 shows the IP addresses that can be configured on Ethernet ports.
Support for iSCSI enables one additional IPv4 address, IPv6 address, or both for each
Ethernet port on every node. These IP addresses are independent of the system’s
management and service IP addresses.
If you configure management IP on both Ethernet ports, choose one of the IP addresses to
connect to GUI or CLI. Note that the system is not able to automatically fail over the
management IP address to a different port. If one management IP address is unavailable, use
an IP address on the alternate network. Clients might be able to use the intelligence in
domain name servers (DNSs) to provide partial failover.
This section describes several IP addressing plans that you can use to configure SAN Volume
Controller V6.1 and later.
Figure 3-2 shows the use of the same IPv4 subnet for management and iSCSI addresses.
Figure 3-3 shows the use of two separate IPv4 subnets for management and iSCSI
addresses.
Figure 3-4 shows the use of redundant networks.
Figure 3-5 shows the use of a redundant network and a third subnet for management.
Figure 3-6 shows the use of a redundant network for iSCSI data and management.
The hardware compatible with V8.1 supports 8 Gbps and 16 Gbps FC fabrics, depending on
the hardware platform and on the switch to which the SAN Volume Controller is connected. In
an environment where you have a fabric with multiple-speed switches, the preferred practice
is to connect the SAN Volume Controller and back-end storage systems to the switch
operating at the highest speed.
You can use the lsfabric command to generate a report that displays the connectivity
between nodes and other controllers and hosts. This report is helpful for diagnosing SAN
problems.
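For example (the host object name is hypothetical), the output can be filtered to confirm that a particular host logs in to the expected node ports:

# Show all Fibre Channel logins that are visible to the clustered system
lsfabric
# Show only the logins for one host object to verify its zoning
lsfabric -host ESX_Host01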
SAN Volume Controller nodes are always deployed in pairs (I/O Groups). An odd number of
nodes in a cluster is a valid standard configuration only if one of the nodes is configured as a
hot spare. However, if there is no hot spare node and a node fails or is removed from the
configuration, the remaining node operates in a degraded mode, but the configuration is still
valid.
If possible, avoid communication between nodes that route across ISLs. Connect all nodes to
the same Fibre Channel or FCF switches.
No ISL hops are permitted among the nodes within the same I/O group, except in a stretched
system configuration with ISLs. For more information, see https://ibm.biz/Bdjacf.
However, no more than three ISL hops are permitted among nodes that are in the same
system but in different I/O groups. If your configuration requires more than three ISL hops for
nodes that are in the same system but in different I/O groups, contact your support center.
Avoid ISL on the path between nodes and back-end storage. If possible, connect all storage
systems to the same Fibre Channel or FCF switches as the nodes. One ISL hop between the
nodes and the storage systems is permitted. If your configuration requires more than one ISL,
contact your support center.
In larger configurations, it is common to have ISLs between host systems and the nodes.
To verify the supported connection speed for FC links to the SAN Volume Controller, use IBM
System Storage Interoperation Center (SSIC) site:
https://www.ibm.com/systems/support/storage/ssic/interoperability.wss
In an Enhanced Stretched Cluster or HyperSwap setup, the two nodes forming an I/O Group
can be colocated (within the same set of racks), or can be placed in separate racks, separate
rooms, or both. For more information, see IBM System Storage SAN Volume Controller and
Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
3.6.2 Zoning
In SAN Volume Controller deployments, the SAN fabric must have three distinct zone classes:
SAN Volume Controller cluster system zone: Allows communication between storage
system nodes (intra-cluster traffic).
Host zones: Allows communication between SAN Volume Controller and hosts.
Storage zone: Allows communication between SAN Volume Controller and back-end
storage.
Figure 3-7 shows the SAN Volume Controller zoning classes.
The subsequent sections contain fundamental rules of SAN Volume Controller zoning.
However, also review the latest zoning guidelines and requirements at the following site when
designing zoning for the planned solution:
https://ibm.biz/BdjGkN
Note: Configurations that use Metro Mirror, Global Mirror, N_Port ID Virtualization, or long-distance links have extra zoning requirements. If you plan to use any of these features, do not rely on the general zoning rules alone.
Create up to two SAN Volume Controller cluster system zones per fabric. In each of them,
place a single port per node designated for intracluster traffic. No more than four ports per
node should be allocated to intracluster traffic. Each node in the system must have at least
two ports with paths to all other nodes in the system. A system node cannot have more than
16 paths to another node in the same system.
Mixed port speeds are not possible for intracluster communication. All node ports within a
clustered system must be running at the same speed.
Figure 3-8 shows a SAN Volume Controller clustered system zoning example.
Figure 3-8 SAN Volume Controller clustered system zoning example (SVC node ports distributed across fabric IDs 21 and 22)
Note: You can use more than four fabric ports per node to improve peak load I/O
performance. However, if a node receives more than 16 logins from another node, then it
causes node error 860. To avoid that error you need to use zoning, port masking, or a
combination of the two.
For more information, see 3.6.7, “Port designation recommendations” on page 71, 3.6.8,
“Port masking” on page 72, and the IBM SAN Volume Controller documentation at:
https://ibm.biz/BdjmGS
A storage controller can present LUNs to the SAN Volume Controller (as MDisks) and to other hosts in the SAN. However, in that case, it is better to allocate separate ports on the back-end storage for SAN Volume Controller communication and for host traffic.
All nodes in a system must be able to connect to the same set of storage system ports on
each device. A system that contains any two nodes that cannot connect to the same set of
storage-system ports is considered degraded. In this situation, a system error is logged that
requires a repair action.
This rule can have important effects on a storage system. For example, an IBM DS4000® series controller can have exclusion rules that determine to which host bus adapter (HBA) worldwide node names (WWNNs) a storage partition can be mapped.
Figure 3-9 shows an example of the SAN Volume Controller, host, and storage subsystem
connections.
Figure 3-9 Example of SAN Volume Controller, host, and storage subsystem connections
Figure 3-10 shows a storage subsystem zoning example.
Figure 3-10 Storage subsystem zoning example (SVC node ports zoned with Storwize and EMC controller ports on fabric IDs 11 and 12)
There might be particular zoning rules governing attachment of specific back-end storage
systems. Review the guidelines at the following website to verify whether you need to
consider additional policies when planning zoning for your back end systems:
https://ibm.biz/Bdjm8H
The preferred zoning policy is to create a separate zone for each host HBA port, and place
exactly one port from each node in each I/O group that the host accesses in this zone. For
deployments with more than 64 hosts defined in the system, this host zoning scheme is
mandatory.
If you plan to use NPIV, review additional host zoning requirements at:
https://ibm.biz/Bdjacb
Figure 3-11 shows a host zoning example.
Figure 3-11 Host zoning example (each zone contains one Power System host port and one port from each SVC node per fabric)
Consider the following rules for zoning hosts with the SAN Volume Controller:
HBA to SAN Volume Controller port zones
Place each host’s HBA in a separate zone with exactly one port from each node in each
I/O group that the host accesses.
Zoning a host's HBA to one port from every node in the cluster is not prohibited, but it reduces the maximum number of hosts that can be attached to the system.
Optional (n+2 redundancy): With four HBA ports, zone each HBA port to two SAN Volume Controller ports (a 1:2 ratio) for a total of eight paths.
Here, the term HBA port describes the SCSI initiator, and the term SAN Volume Controller port describes the SCSI target.
Important: The maximum number of host paths per LUN must not exceed eight.
Another way to control the number of paths between hosts and the SAN Volume Controller
is to use port mask. The port mask is an optional parameter of the mkhost and chhost
commands. The port mask configuration has no effect on iSCSI connections.
For each login between a host Fibre Channel port and node Fibre Channel port, the node
examines the port mask for the associated host object. It then determines whether access
is allowed (port mask bit for given port is set) or denied (port mask bit is cleared). If access
is denied, the node responds to SCSI commands as though the HBA WWPN is unknown.
The port mask is 64 bits. Valid mask values range from all 0s (no ports enabled) to all 1s
(all ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default
value is all 1s.
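For example (the host name and WWPNs are hypothetical), the mask can be supplied when the host object is created and changed later if needed:

# Create a host object whose logins are accepted only on node ports 1 and 2 (mask 0011)
mkhost -name AIX_Host01 -fcwwpn 10000000C9609A3C:10000000C9609A3D -mask 0011
# Later, widen the mask so that node ports 1 - 4 accept logins from this host
chhost -mask 1111 AIX_Host01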
Balanced host load across HBA ports
If the host has more than one HBA port per fabric, zone each host port with a separate
group of SAN Volume Controller ports.
Balanced host load across SAN Volume Controller ports
To obtain the best overall performance of the subsystem and to prevent overloading, the
load of each SAN Volume Controller port should be equal. Assuming similar load
generated by each host, you can achieve this balance by zoning approximately the same
number of host ports to each SAN Volume Controller port.
Figure 3-12 on page 69 shows an example of a balanced zoning configuration that was
created by completing the following steps:
1. Divide ports on the I/O Group into two disjoint sets, such that each set contains two ports
from each I/O Group node, each connected to a different fabric.
For consistency, use the same port number on each I/O Group node. The example on
Figure 3-12 on page 69 assigns ports 1 and 4 to one port set, and ports 2 and 3 to the
second set.
Because the I/O Group nodes have four FC ports each, two port sets are created.
2. Divide hosts attached to the I/O Group into two equally numerous groups.
In general, for I/O Group nodes with more than four ports, divide the hosts into as many
groups as you created sets in step 1 on page 68.
3. Map each host group to exactly one port set.
4. Zone all hosts from each group to the corresponding set of I/O Group node ports.
The host connections in the example in Figure 3-12 are defined in the following manner:
– Hosts in group one are always zoned to ports 1 and 4 on both nodes.
– Hosts in group two are always zoned to ports 2 and 3 on both nodes of the I/O Group.
Tip: Create an alias for the I/O Group port set. This step makes it easier to correctly zone
hosts to the correct set of I/O Group ports. Additionally, it also makes host group
membership visible in the FC switch configuration.
The use of this schema provides four paths to one I/O Group for each host, and helps to
maintain an equal distribution of host connections on SAN Volume Controller ports.
Tip: To maximize performance from the host point of view, distribute volumes that are
mapped to each host between both I/O Group nodes.
Figure 3-12 Balanced zoning configuration: 256 hosts split into two groups of 128; group one is zoned to node ports 1 and 4, and group two to node ports 2 and 3, across fabrics A and B
When possible, use the minimum number of paths that are necessary to achieve a sufficient
level of redundancy. For the SAN Volume Controller environment, no more than four paths per
I/O Group are required to accomplish this layout.
All paths must be managed by the multipath driver on the host side. Make sure that the
multipath driver on each server is capable of handling the number of paths required to access
all volumes mapped to the host.
For hosts that use four HBAs/ports with eight connections to an I/O Group, use the zoning
schema that is shown in Figure 3-13. You can combine this schema with the previous
four-path zoning schema.
Figure 3-13 Zoning schema for hosts that use four HBA ports with eight connections to an I/O Group
When designing zoning for a geographically dispersed solution, consider the effect of the
cross-site links on the performance of the local system.
Important: Be careful when you perform the zoning so that ports dedicated for intra-cluster
communication are not used for Host/Storage traffic in the 8-port and 12-port
configurations.
The use of mixed port speeds for intercluster communication can lead to port congestion,
which can negatively affect the performance and resiliency of the SAN. Therefore, it is not
supported.
Important: If you zone two Fibre Channel ports on each node in the local system to two
Fibre Channel ports on each node in the remote system, you will be able to limit the impact
of severe and abrupt overload of the intercluster link on system operations.
If you zone all node ports for intercluster communication and the intercluster link becomes
severely and abruptly overloaded, the local FC fabric can become congested so that no FC
ports on the local SAN Volume Controller nodes can perform local intracluster heartbeat
communication. This situation can, in turn, result in the nodes experiencing lease expiry
events.
In a lease expiry event, a node restarts to attempt to reestablish communication with the
other nodes in the clustered system. If the leases for all nodes expire simultaneously, a
loss of host access to volumes can occur during the restart events.
For more information about zoning best practices, see IBM System Storage SAN Volume
Controller and Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
Additionally, there is a benefit in isolating remote replication traffic to dedicated ports, and
ensuring that any problems that affect the cluster-to-cluster interconnect do not impact all
ports on the local cluster.
Figure 3-14 shows port designations suggested by IBM for 2145-DH8 and 2145-CG8 nodes.
Figure 3-14 Port designation recommendations for isolating traffic on 2145-DH8 and 2145-CG8 nodes
Figure 3-15 shows the suggested designations for 2145-SV1 nodes.
Figure 3-15 Port designation recommendations for isolating traffic on 2145-SV1 nodes
Note: With 12 or more ports per node, four ports should be dedicated to node-to-node traffic. Doing so is especially important when high write data rates are expected, because all writes are mirrored between I/O Group nodes over these ports.
The port designation patterns shown in the tables provide the required traffic isolation and
simplify migrations to configurations with greater number of ports. More complicated port
mapping configurations that spread the port traffic across the adapters are supported and can
be considered. However, these approaches do not appreciably increase availability of the
solution.
Alternative port mappings that spread traffic across HBAs might allow adapters to come back
online following a failure. However, they do not prevent a node from going offline temporarily
to restart and attempt to isolate the failed adapter and then rejoin the cluster. Also, the mean
time between failures (MTBF) of the adapter is not significantly shorter than that of the
non-redundant node components. The presented approach takes all of these considerations
into account with a view that increased complexity can lead to migration challenges in the
future, and a simpler approach is usually better.
There are two Fibre Channel port masks on a system. The local port mask controls connectivity to other nodes in the same system, and the partner port mask controls connectivity to nodes in remote, partnered systems. By default, all ports are enabled for both local and partner connectivity.
The port masks apply to all nodes on a system. A different port mask cannot be set on nodes
in the same system. You do not have to have the same port mask on partnered systems.
Mixing host, back-end, intracluster, and replication traffic on the same ports can cause congestion and buffer-to-buffer credit exhaustion, which can result in heavy degradation of performance in your storage environment.
Fibre Channel IO ports are logical ports, which can exist on Fibre Channel platform ports or
on FCoE platform ports.
The port mask is a 64-bit field that applies to all nodes in the cluster. In the local FC port mask, setting a port's bit to 1 allows that port to carry node-to-node (intracluster) traffic. In the remote FC port mask, setting a port's bit to 1 allows that port to carry replication traffic. If a port's bit is 0 in a specific mask, the corresponding traffic type is not allowed on that port: a 0 in the local FC port mask means no node-to-node traffic, and a 0 in the remote FC port mask means no replication traffic. Therefore, if a port has a 0 in both the local and remote FC port masks, only host and back-end storage traffic is allowed on it.
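As a sketch only (using 16-character binary masks where the rightmost bit represents port 1; verify the exact mask semantics for your code level before applying), the system-wide masks can be set with the chsystem command:

# Allow node-to-node traffic only on FC I/O ports 1 and 2 of every node
chsystem -localfcportmask 0000000000000011
# Allow replication traffic only on FC I/O ports 7 and 8 of every node
chsystem -partnerfcportmask 0000000011000000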
If you are using the GUI, click Settings → Network → Fibre Channel Ports. Then, you can
select the use of a port. Setting none means no node-to-node and no replication traffic is
allowed, and only host and storage traffic is allowed. Setting local means only node-to-node
traffic is allowed, and remote means that only replication traffic is allowed. Figure 3-16 shows
an example of setting a port mask on port 1 to Local.
Each SAN Volume Controller node is equipped with up to three onboard Ethernet network
interface cards (NICs), which can operate at a link speed of 10 Mbps, 100 Mbps, or
1000 Mbps. All NICs can be used to carry iSCSI traffic. For optimal performance, use 1 Gbps
links between SAN Volume Controller and iSCSI-attached hosts when the SAN Volume
Controller node’s onboard NICs are used.
Starting with the SAN Volume Controller 2145-DH8, an optional 10 Gbps 4-port Ethernet
adapter (Feature Code AH12) is available. This feature provides one I/O adapter with four
10 GbE ports and SFP+ transceivers. It can be used to add 10 Gb iSCSI/FCoE connectivity
to the SAN Volume Controller Storage Engine.
Figure 3-17 shows an overview of the iSCSI implementation in the SAN Volume Controller.
Figure 3-17 Overview of the iSCSI implementation (an iSCSI initiator node, for example iqn.1991-05.com.microsoft:itsoW2008, connecting to the SVC cluster as the iSCSI network entity)
Both onboard Ethernet ports of a SAN Volume Controller node can be configured for iSCSI.
For each instance of an iSCSI target node (that is, each SAN Volume Controller node), you
can define two IPv4 and two IPv6 addresses or iSCSI network portals:
If the optional 10 Gbps Ethernet feature is installed, its ports can also be used for iSCSI traffic.
All node types that can run SAN Volume Controller V6.1 or later can use the iSCSI feature.
Generally, enable jumbo frames in your iSCSI storage network.
iSCSI IP addresses can be configured for one or more nodes.
iSCSI Simple Name Server (iSNS) addresses can be configured in the SAN Volume
Controller.
Decide whether you implement authentication for the host to SAN Volume Controller iSCSI
communication. The SAN Volume Controller supports the Challenge Handshake
Authentication Protocol (CHAP) authentication methods for iSCSI.
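For example (addresses and the CHAP secret are placeholders), an iSCSI IP address can be assigned to Ethernet port 2 of each node, and a system-wide CHAP secret can be set:

# Assign an iSCSI IPv4 address to Ethernet port 2 of node 1
cfgportip -node 1 -ip 10.10.30.31 -mask 255.255.255.0 -gw 10.10.30.1 2
# Repeat for node 2 so that both nodes present iSCSI portals
cfgportip -node 2 -ip 10.10.30.32 -mask 255.255.255.0 -gw 10.10.30.1 2
# Optionally set a CHAP secret that hosts must use when they authenticate to the system
chsystem -chapsecret MySecret01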
An introduction to the workings of iSCSI protocol can be found in iSCSI Implementation and
Best Practices on IBM Storwize Storage Systems, SG24-8327.
If you plan to use node’s 1 Gbps Ethernet ports for iSCSI host attachment, dedicate Ethernet
port one for the SAN Volume Controller management and port two for iSCSI use. This way,
port two can be connected to a separate network segment or virtual local area network
(VLAN) dedicated to iSCSI traffic.
Note: Ethernet link aggregation (port trunking) or channel bonding for the SAN Volume
Controller nodes’ Ethernet ports is not supported for the 1 Gbps ports.
You can use the following types of iSCSI initiators in host systems:
Software initiator: Available for most operating systems (OS), including AIX, Linux, and
Windows.
Hardware initiator: Implemented as a network adapter with an integrated iSCSI processing
unit, which is also known as an iSCSI HBA.
Make sure that iSCSI initiators, targets, or both that you plan to use are supported. Use the
following sites for reference:
IBM SAN Volume Controller V8.1 Support Matrix:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
IBM Knowledge Center for IBM SAN Volume Controller:
https://ibm.biz/Bdjvhm
IBM System Storage Interoperation Center (SSIC)
https://www.ibm.com/systems/support/storage/ssic/interoperability.wss
An alias string can also be associated with an iSCSI node. The alias enables an organization
to associate a string with the iSCSI name. However, the alias string is not a substitute for the
iSCSI name.
Note: The cluster name and node name form part of the IQN. Changing any of them might
require reconfiguration of all iSCSI nodes that communicate with the SAN Volume
Controller.
For more information about back-end storage supported for iSCSI connectivity, see these
websites:
IBM Support Information for SAN Volume Controller
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
IBM System Storage Interoperation Center (SSIC)
https://www.ibm.com/systems/support/storage/ssic/interoperability.wss
For more information about supported storage subsystems, see these websites:
IBM Support Information for SAN Volume Controller
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
IBM System Storage Interoperation Center (SSIC)
https://www.ibm.com/systems/support/storage/ssic/interoperability.wss
Apply the following general guidelines for back-end storage subsystem configuration
planning:
In the SAN, storage controllers that are used by the SAN Volume Controller clustered
system must be connected through SAN switches. Direct connection between the SAN
Volume Controller and the storage controller is not supported.
Enhanced Stretched Cluster configurations have additional requirements and
configuration guidelines. For more information about performance and preferred practices
for the SAN Volume Controller, see IBM System Storage SAN Volume Controller and
Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
MDisks within storage pools: V6.1 and later provide for better load distribution across
paths within storage pools.
In previous code levels, the path to MDisk assignment was made in a round-robin fashion
across all MDisks that are configured to the clustered system. With that method, no
attention is paid to how MDisks within storage pools are distributed across paths.
Therefore, it was possible and even likely that certain paths were more heavily loaded than
others.
Starting with V6.1, the code contains logic that takes into account which MDisks are
provided by which back-end storage systems. Therefore, the code more effectively
distributes active paths based on the storage controller ports that are available.
The Detect MDisks action (CLI command detectmdisk) must be run after a storage pool is created or modified (MDisks added or removed) so that paths are redistributed.
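For example, after new LUNs are presented from the back-end storage (the filter shown is one way to confirm the result):

# Rescan the Fibre Channel network for new or changed MDisks and rebalance paths
detectmdisk
# Confirm that the new MDisks are visible and not yet assigned to a pool
lsmdisk -filtervalue mode=unmanaged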
If your back-end storage system does not support the SAN Volume Controller round-robin
algorithm, ensure that the number of MDisks per storage pool is a multiple of the number of
storage ports that are available. This approach ensures sufficient bandwidth for the storage
controller, and an even balance across storage controller ports.
In general, configure disk subsystems as though SAN Volume Controller was not used.
However, there might be specific requirements or limitations as to the features usable in the
given back-end storage system when it is attached to SAN Volume Controller. Review the
appropriate section of documentation to verify that your back-end storage is supported and to
check for any special requirements:
https://ibm.biz/Bdjm8H
3.9 Storage pool configuration
The storage pool is at the center of the many-to-many relationship between the MDisks and
the volumes. It acts as a container of physical disk capacity from which chunks of MDisk
space, known as extents, are allocated to form volumes presented to hosts.
MDisks in the SAN Volume Controller are LUNs that are assigned from the back-end storage
subsystems to the SAN Volume Controller. There are two classes of MDisks: Managed and
unmanaged. An unmanaged MDisk is a LUN that is presented to SVC by back-end storage,
but is not assigned to any storage pool. A managed MDisk is an MDisk that is assigned to a
storage pool. An MDisk can be assigned only to a single storage pool.
SAN Volume Controller clustered system must have exclusive access to every LUN (MDisk) it
is using. Any specific LUN cannot be presented to more than one SAN Volume Controller
cluster. Also, presenting the same LUN to a SAN Volume Controller and a host is not allowed.
One of the basic storage pool parameters is the extent size. All MDisks in the storage pool
have the same extent size, and all volumes that are allocated from the storage pool inherit its
extent size.
The SAN Volume Controller supports extent sizes from 16 mebibytes (MiB) to 8192 MiB. The
extent size is a property of the storage pool and is set when the storage pool is created.
The extent size of a storage pool cannot be changed. If you need to change extent size, the
storage pool must be deleted and a new storage pool configured.
Table 3-2 lists all of the available extent sizes in a SAN Volume Controller and the maximum
managed storage capacity for each extent size.
Table 3-2 Extent size and total storage capacities per system
Extent size (MiB) Total storage capacity manageable per system
512 2 PiB
1024 4 PiB
2048 8 PiB
4096 16 PiB
8192 32 PiB
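For example (pool and MDisk names are hypothetical), the extent size is chosen once when the pool is created with mkmdiskgrp; moving to a different extent size later means creating a new pool and mirroring volumes into it:

# Create a storage pool with a 256 MiB extent size
mkmdiskgrp -name Pool_Tier1 -ext 256 -mdisk mdisk4:mdisk5:mdisk6
# To change the extent size, create a new pool and add a mirrored volume copy in it
mkmdiskgrp -name Pool_Tier1_new -ext 512 -mdisk mdisk10:mdisk11
addvdiskcopy -mdiskgrp Pool_Tier1_new VDISK_DB01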
When planning storage pool layout, consider the following aspects:
Pool extent size:
– Generally, use 128 MiB or 256 MiB. The Storage Performance Council (SPC) benchmarks use a 256 MiB extent.
– Pick the extent size and then use that size for all storage pools.
– You cannot migrate volumes between storage pools with different extent sizes.
However, you can use volume mirroring to create copies between storage pools with
different extent sizes.
Storage pool reliability, availability, and serviceability (RAS) considerations:
– The number and size of storage pools affect system availability. Using a larger number of smaller pools reduces the failure domain if one of the pools goes offline. However, an increased number of storage pools introduces management overhead, impacts storage space use efficiency, and is subject to the configuration maximum limit.
– An alternative approach is to create few large storage pools. All MDisks that constitute
each of the pools should have the same performance characteristics.
– The storage pool goes offline if an MDisk is unavailable, even if the MDisk has no data
on it. Do not put MDisks into a storage pool until they are needed.
– Put image mode volumes in a dedicated storage pool or pools.
Storage pool performance considerations:
– It might make sense to create multiple storage pools if you are attempting to isolate
workloads to separate disk drives.
– Create storage pools out of MDisks with similar performance. This technique is the only
way to ensure consistent performance characteristics of volumes created from the
pool.
3.9.1 The storage pool and SAN Volume Controller cache relationship
The SAN Volume Controller uses cache partitioning to limit the potential negative effects that
a poorly performing storage controller can have on the clustered system. The cache partition
allocation size is based on the number of configured storage pools. This design protects
against individual overloaded back-end storage system from filling system write cache and
degrading the performance of the other storage pools. For more information, see Chapter 2,
“System overview” on page 13.
Table 3-3 shows the limit of the write-cache data that can be used by a single storage pool.
Table 3-3 Upper limit of write-cache data that can be used by a single storage pool
Number of storage pools Upper limit of write-cache data
1                       100%
2                       66%
3                       40%
4                       30%
5 or more               25%
No single partition can occupy more than its upper limit of write cache capacity. When the
maximum cache size is allocated to the pool, the SAN Volume Controller starts to limit
incoming write I/Os for volumes that are created from the storage pool. That is, the host writes
are limited to the destage rate, on a one-out-one-in basis.
Only writes that target the affected storage pool are limited. The read I/O requests for the
throttled pool continue to be serviced normally. However, because the SAN Volume Controller
is destaging data at a maximum rate that the back-end storage can sustain, read response
times are expected to be affected.
All I/O that is destined for other (non-throttled) storage pools continues as normal.
Every volume is assigned to an I/O Group that defines which pair of SAN Volume Controller
nodes will service I/O requests to the volume.
Important: No fixed relationship exists between I/O Groups and storage pools.
Strive to distribute volumes evenly across available I/O Groups and nodes within the clustered
system. Although volume characteristics depend on the storage pool from which it is created,
any volume can be assigned to any node.
When you create a volume, it is associated with one node of an I/O Group, the preferred
access node. By default, when you create a volume it is associated with the I/O Group node by
using a round-robin algorithm. However, you can manually specify the preferred access node
if needed.
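For example (the volume, pool, and node names are hypothetical), the I/O Group and preferred access node can be specified explicitly when the volume is created:

# Create a volume in io_grp0 with node2 as the preferred access node
mkvdisk -name VDISK_APP01 -mdiskgrp Pool_Tier1 -size 500 -unit gb -iogrp io_grp0 -node node2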
No matter how many paths are defined between the host and the volume, all I/O traffic is
serviced by only one node (the preferred access node).
If you plan to use volume mirroring, for maximum availability put each copy in a different
storage pool backed by different back-end storage subsystems. However, depending on your
needs it might be sufficient to use a different set of physical drives, a different storage
controller, or a different back-end storage for each volume copy. Strive to place all volume
copies in storage pools with similar performance characteristics. Otherwise, the volume
performance as perceived by the host might be limited by the performance of the slowest
storage pool.
Image mode volumes are an extremely useful tool in storage migration and when introducing
IBM SAN Volume Controller to existing storage environment.
3.10.2 Planning for thin-provisioned volumes
A thin-provisioned volume has a virtual capacity and a real capacity. Virtual capacity is the
volume storage capacity that a host sees as available. Real capacity is the actual storage
capacity that is allocated to a volume copy from a storage pool. Real capacity limits the
amount of data that can be written to a thin-provisioned volume.
When planning use of thin-provisioned volumes, consider expected usage patterns for the
volume. In particular, the actual size of the data and the rate of data change.
Thin-provisioned volumes require more I/Os because of directory accesses. For fully random
access, and a workload with 70% reads and 30% writes, a thin-provisioned volume requires
approximately one directory I/O for every user I/O. Additionally, thin-provisioned volumes
require more processor processing, so the performance per I/O Group can also be reduced.
However, the directory is two-way write-back-cached (as with the SAN Volume Controller
fastwrite cache), so certain applications perform better.
Additionally, the ability to thin-provision volumes can be a worthwhile tool allowing hosts to
see storage space significantly larger than what is actually allocated within the storage pool.
Thin provisioning can also simplify storage allocation management. You can define virtual
capacity of a thinly provisioned volume to an application based on the future requirements,
but allocate real storage based on today’s use.
The main risk that is associated with using thin-provisioned volumes is running out of real capacity in the volumes, in the storage pool, or in both, and the resulting unplanned outage. Therefore, strict monitoring of the used capacity on all non-autoexpand volumes, and of the free space in the storage pool, is required.
When you configure a thin-provisioned volume, you can define a warning level attribute to
generate a warning event when the used real capacity exceeds a specified amount or
percentage of the total virtual capacity. You can also use the warning event to trigger other
actions, such as taking low-priority applications offline or migrating data into other storage
pools.
If a thin-provisioned volume does not have enough real capacity for a write operation, the
volume is taken offline and an error is logged (error code 1865, event ID 060001). Access to
the thin-provisioned volume is restored by increasing the real capacity of the volume, which
might require increasing the size of the storage pool from which it is allocated. Until this time,
the data is held in the SAN Volume Controller cache. Although in principle this situation is not
a data integrity or data loss issue, you must not rely on the SAN Volume Controller cache as a
backup storage mechanism.
Important: Set and monitor a warning level on the used capacity so that you have
adequate time to respond and provision more physical capacity.
Consider using the autoexpand feature of the thin-provisioned volumes to reduce human
intervention required to maintain access to thin-provisioned volumes.
When you create a thin-provisioned volume, you can choose the grain size for allocating
space in 32 kibibytes (KiB), 64 KiB, 128 KiB, or 256 KiB chunks. The grain size that you select
affects the maximum virtual capacity for the thin-provisioned volume. The default grain size is
256 KiB, which is the preferred option. If you select 32 KiB for the grain size, the volume size
cannot exceed 260,000 gibibytes (GiB). The grain size cannot be changed after the
thin-provisioned volume is created.
Generally, smaller grain sizes save space, but require more metadata access, which can
adversely affect performance. If you are not going to use the thin-provisioned volume as a
FlashCopy source or target volume, use 256 KiB to maximize performance. If you are going to
use the thin-provisioned volume as a FlashCopy source or target volume, specify the same
grain size for the volume and for the FlashCopy function. In this situation ideally grain size
should be equal to the typical I/O size from the host.
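As an illustration (names and sizes are placeholders), a thin-provisioned volume with autoexpand, a usage warning, and the default 256 KiB grain size can be created as follows:

# 2 TB virtual capacity, 10% real capacity, warning at 80% used, automatic expansion
mkvdisk -name VDISK_TP01 -mdiskgrp Pool_Tier1 -size 2 -unit tb -iogrp io_grp0 -rsize 10% -autoexpand -warning 80% -grainsize 256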
A thin-provisioned volume feature that is called zero detect provides clients with the ability to
reclaim unused allocated disk space (zeros) when they are converting a fully allocated
volume to a thin-provisioned volume by using volume mirroring.
The SAN Volume Controller imposes no particular limit on the actual distance between the
SAN Volume Controller nodes and host servers. However, for host attachment, the SAN
Volume Controller supports up to three ISL hops in the fabric. This capacity means that the server and the SAN Volume Controller can be separated by up to five FC links, four of which can be 10 km (6.2 miles) long if longwave Small Form-factor Pluggable (SFP) transceivers are used.
Figure 3-18 shows an example of a supported configuration with SAN Volume Controller
nodes using shortwave SFPs.
In Figure 3-18, the optical distance between SAN Volume Controller Node 1 and Host 2 is
slightly over 40 km (24.85 miles).
To avoid latencies that lead to degraded performance, avoid ISL hops whenever possible. In
an optimal setup, the servers connect to the same SAN switch as the SAN Volume Controller
nodes.
Note: Before attaching host systems to SAN Volume Controller, see the Configuration
Limits and Restrictions for the IBM System Storage SAN Volume Controller described in:
http://www.ibm.com/support/docview.wss?uid=ssg1S1009560
For large storage networks, you should plan to set the correct SCSI command queue depth on your hosts. For this purpose, a large storage network is defined as one that contains at least 1000 volume mappings. For example, a deployment of 50 hosts with 20 volumes mapped to each of them would be considered a large storage network. For details of
the queue depth calculations, see this website:
https://ibm.biz/BdjKcK
3.11.2 Offloaded data transfer
If your Microsoft Windows hosts are configured to use Microsoft Offloaded Data Transfer
(ODX) to offload the copy workload to the storage controller, then consider the benefits of this
technology against the additional load on the storage controllers. Both the benefits and the
impact of enabling ODX are especially prominent in Microsoft Hyper-V environments.
LUN masking is usually implemented in the device driver software on each host. The host has
visibility of more LUNs than it is intended to use. The device driver software masks the LUNs
that are not to be used by this host. After the masking is complete, only some disks are visible
to the operating system. The system can support this type of configuration by mapping all
volumes to every host object and by using operating system-specific LUN masking
technology. However, the default, and preferred, system behavior is to map only those
volumes that the host is required to access.
The act of mapping a volume to a host makes the volume accessible to the WWPNs or iSCSI
names such as iSCSI qualified names (IQNs) or extended-unique identifiers (EUIs) that are
configured in the host object.
For best performance, split each host group into two sets. For each set, configure the
preferred access node for volumes presented to the host set to one of the I/O Group nodes.
This approach helps to evenly distribute load between the I/O Group nodes.
Note that a volume can be mapped only to a host that is associated with the I/O Group to
which the volume belongs.
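As an illustrative sketch only (the host name, WWPN, and volume name are hypothetical; verify
the exact parameter names for your code level), a host object can be created and a volume
mapped to it from the CLI as follows:

svctask mkhost -name appserver01 -fcwwpn 2100000E1E30A5B8
svctask mkvdiskhostmap -host appserver01 app_vol01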
3.14 Advanced Copy Services
The SAN Volume Controller offers the following Advanced Copy Services:
FlashCopy
Metro Mirror
Global Mirror
Layers: A property called layer for the clustered system is used when a copy services
partnership exists between a SAN Volume Controller and an IBM Storwize V7000. There
are two layers: Replication and storage. All SAN Volume Controller clustered systems are
configured as replication layer and cannot be changed. By default, the IBM Storwize V7000
is configured as storage layer. This configuration must be changed by using the chsystem
CLI command before you use it to make any copy services partnership with the SAN
Volume Controller.
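For illustration, the layer of an IBM Storwize V7000 can be changed with a command along the
following lines. Run it on the Storwize V7000, not on the SVC, and note that the layer typically
cannot be changed while partnerships or remote copy relationships exist:

svctask chsystem -layer replication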
For each volume define which FlashCopy type best fits your requirements:
– No copy
– Full copy
– Thin-Provisioned
– Incremental
Define how many copies you need and the lifetime of each copy.
Estimate the expected data change rate for FlashCopy types other than full copy.
Consider memory allocation for copy services. If you plan to define multiple FlashCopy
relationships, you might need to modify the default memory setting. See 11.2.18, “Memory
allocation for FlashCopy” on page 526.
Define the grain size that you want to use. When data is copied between volumes, it is
copied in units of address space known as grains. The grain size is 64 KiB or 256 KiB. The
FlashCopy bitmap contains one bit for each grain. The bit records whether the associated
grain has been split by copying the grain from the source to the target. Larger grain sizes
can cause a longer FlashCopy time and a higher space usage in the FlashCopy target
volume. The data structure and the source data location can modify those effects.
If the grain is larger than most host writes, this can lead to write amplification on the target
system. This increase is because for every write IO to an unsplit grain, the whole grain
must be read from the FlashCopy source and copied to the target. Such a situation could
result in performance degradation.
If using a thin-provisioned volume in a FlashCopy map, for best performance use the same
grain size as the map grain size. Additionally, if using a thin-provisioned volume directly
with a host system, use a grain size that more closely matches the host IO size.
Define which FlashCopy rate best fits your requirement in terms of the storage
performance and the amount of time required to complete the FlashCopy. Table 3-4 shows
the relationship of the background copy rate value to the number of grain split attempts per
second.
For performance-sensitive configurations, test the performance observed for different
settings of grain size and FlashCopy rate in your actual environment before committing a
solution to production use. See Table 3-4 for some baseline data.
Table 3-4   Relationship of the background copy rate value to grain splits per second

Copy rate value   Data copied per second   Grains split per second   Grains split per second
                                           (256 KiB grain)           (64 KiB grain)
11 - 20           256 KiB                  1                         4
21 - 30           512 KiB                  2                         8
31 - 40           1 MiB                    4                         16
41 - 50           2 MiB                    8                         32
51 - 60           4 MiB                    16                        64
61 - 70           8 MiB                    32                        128
71 - 80           16 MiB                   64                        256
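As a sketch only, a FlashCopy mapping with an explicit grain size and background copy rate can
be created as follows (the source and target volume names are hypothetical):

svctask mkfcmap -source app_vol01 -target app_vol01_copy -grainsize 64 -copyrate 50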
3.14.2 Combining FlashCopy and Metro Mirror or Global Mirror
Use of FlashCopy in combination with Metro Mirror or Global Mirror is allowed if the following
conditions are fulfilled:
A FlashCopy mapping must be in the idle_copied state when its target volume is the
secondary volume of a Metro Mirror or Global Mirror relationship.
A FlashCopy mapping cannot be manipulated to change the contents of the target volume
of that mapping when the target volume is the primary volume of a Metro Mirror or Global
Mirror relationship that is actively mirroring.
The I/O group for the FlashCopy mappings must be the same as the I/O group for the
FlashCopy target volume.
Global Mirror is a copy service that is similar to Metro Mirror but copies data asynchronously.
You do not have to wait for the write to the secondary system to complete. For long distances,
performance is improved compared to Metro Mirror. However, if a failure occurs, you might
lose data.
Global Mirror uses one of two methods to replicate data. Multicycling Global Mirror is
designed to replicate data while adjusting for bandwidth constraints. It is appropriate for
environments where it is acceptable to lose a few minutes of data if a failure occurs. For
environments with higher bandwidth, non-cycling Global Mirror can be used so that less than
a second of data is lost if a failure occurs. Global Mirror also works well when sites are more
than 300 kilometers away.
When SAN Volume Controller copy services are used, all components in the SAN must
sustain the workload that is generated by application hosts and the data replication workload.
Otherwise, the system can automatically stop copy services relationships to protect your
application hosts from increased response times.
Starting with V7.6, you can use the chsystem command to set the maximum replication delay
for the system. This value ensures that a single slow write operation does not affect the
entire primary site.
You can configure this delay for all relationships or consistency groups that exist on the
system by using the maxreplicationdelay parameter on the chsystem command. This value
indicates the amount of time (in seconds) that a host write operation can be outstanding
before replication is stopped for a relationship on the system. If the system detects a delay in
replication on a particular relationship or consistency group, only that relationship or
consistency group is stopped.
In systems with many relationships, a single slow relationship can cause delays for the
remaining relationships on the system. This setting isolates the potential relationship with
delays so that you can investigate the cause of these issues. When the maximum replication
delay is reached, the system generates an error message that identifies the relationship that
exceeded the maximum replication delay.
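For example, the following hedged sketch sets a maximum replication delay of 30 seconds; choose
a value that matches the response-time tolerance of your applications:

svctask chsystem -maxreplicationdelay 30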
To avoid such incidents, consider deployment of a SAN performance monitoring tool to
continuously monitor the SAN components for error conditions and performance problems.
Use of such a tool helps you detect potential issues before they affect your environment.
When planning for use of data replication services, plan for the following aspects of the
solution:
Volumes and consistency groups for copy services
Copy services topology
Choice between Metro Mirror and Global Mirror
Connection type between clusters (FC, FCoE, IP)
Cluster configuration for copy services, including zoning
IBM explicitly tests products for interoperability with the SAN Volume Controller. For more
information about the current list of supported devices, see the IBM System Storage
Interoperation Center (SSIC) website:
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
If an application requires write order to be preserved for the set of volumes that it uses, create
a consistency group for these volumes.
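As an illustrative sketch (the remote system name, consistency group name, and volume names are
hypothetical), a consistency group can be created and a Global Mirror relationship added to it
as follows; omit -global to create a Metro Mirror relationship instead:

svctask mkrcconsistgrp -cluster REMOTE_SVC -name CG_APP1
svctask mkrcrelationship -master app_vol01 -aux app_vol01_dr -cluster REMOTE_SVC -consistgrp CG_APP1 -global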
Metro Mirror allows you to prevent any data loss during a system failure, but has more
stringent requirements, especially regarding intercluster link bandwidth and latency, as well
as remote site storage performance. Additionally, it might incur a performance penalty
because writes are not confirmed to the host until data reception confirmation is received
from the remote site. Because of finite data transfer speeds, this remote write penalty grows
with the distance between the sites. A point-to-point dark fiber-based link typically incurs a
round-trip latency of 1 ms per 100 km (62.13 miles). Other technologies provide longer
round-trip latencies. Inter-site link latency defines the maximum possible distance for any
performance level.
Global Mirror allows you to relax constraints on system requirements at the cost of using
asynchronous replication, which allows the remote site to lag behind the local site. Choice of
the replication type has a major impact on all other aspects of the copy services planning.
The use of Global Mirror and Metro Mirror between the same two clustered systems is
supported.
If you plan to use copy services to realize some application function (for example, disaster
recovery orchestration software), review the requirements of the application you plan to use.
Verify that the complete solution is going to fulfill supportability criteria of both IBM and the
application vendor.
Intercluster link
The local and remote clusters can be connected by an FC, FCoE, or IP network. The IP
network can be used as a carrier for an FCIP solution or as a native data carrier.
Each of the technologies has its own requirements concerning supported distance, link
speeds, bandwidth, and vulnerability to frame or packet loss. For the most current information
regarding requirements and limitations of each of the supported technologies, see this
website:
https://ibm.biz/BdjKbu
The two major parameters of a link are its bandwidth and latency. Latency might limit
maximum bandwidth available over IP links depending on the details of the technology used.
When planning the Intercluster link, take into account the peak performance that is required.
This consideration is especially important for Metro Mirror configurations.
When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for the
IBM SAN Volume Controller intercluster heartbeat traffic. The amount of traffic depends on
how many nodes are in each of the two clustered systems.
Table 3-5 shows the amount of heartbeat traffic, in megabits per second, that is generated by
various sizes of clustered systems.
Table 3-5   Intercluster heartbeat traffic in megabits per second

Nodes in system 1   Nodes in system 2
                    2 nodes   4 nodes   6 nodes   8 nodes
2 nodes             5         6         6         6
4 nodes             6         10        11        12
6 nodes             6         11        16        17
8 nodes             6         12        17        21
These numbers estimate the amount of traffic between the two clustered systems when no
I/O is taking place to mirrored volumes. Half of the data is sent by each of the systems. The
traffic is divided evenly over all available intercluster links. Therefore, if you have two
redundant links, half of this traffic is sent over each link.
The bandwidth between sites must be sized to meet the peak workload requirements. You
can estimate the peak workload requirement by measuring the maximum write workload
averaged over a period of 1 minute or less, and adding the heartbeat bandwidth. Statistics
must be gathered over a typical application I/O workload cycle, which might be days, weeks,
or months, depending on the environment on which the SAN Volume Controller is used.
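For illustration only, assume a measured peak write workload of 200 MBps (about 1,600 Mbps)
between two 4-node clustered systems. Table 3-5 indicates roughly 10 Mbps of heartbeat traffic
for that configuration, so the link should be sized for at least about 1,610 Mbps, plus headroom
for initial synchronization and resynchronization traffic.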
When planning the inter-site link, consider also the initial sync and any future resync
workloads. It might be worthwhile to secure additional link bandwidth for the initial data
synchronization.
If the link between the sites is configured with redundancy so that it can tolerate single
failures, you must size the link so that the bandwidth and latency requirements are met even
during single failure conditions.
When planning the inter-site link, note carefully whether it is dedicated to the inter-cluster
traffic or is also going to carry other data. Sharing the link with other traffic (for example,
cross-site IP traffic) might reduce the cost of creating the inter-site connection and improve
link utilization. However, doing so might also affect the link’s ability to provide the required
bandwidth for data replication.
Verify carefully that the devices that you plan to use to implement the intercluster link are
supported.
Cluster configuration
If you configure replication services, you might decide to dedicate ports for intercluster
communication, for the intracluster traffic, or both. In that case, make sure that your cabling
and zoning reflect that decision. Additionally, these dedicated ports are inaccessible for host
or back-end storage traffic, so plan your volume mappings as well as hosts and back-end
storage connections accordingly.
Global Mirror volumes should have their preferred access nodes evenly distributed between
the nodes of the clustered systems. Figure 3-20 shows an example of a correct relationship
between volumes in a Metro Mirror or Global Mirror solution.
The back-end storage systems at the replication target site must be capable of handling the
peak application workload to the replicated volumes, plus the client-defined level of
background copy, plus any other I/O being performed at the remote site. The performance of
applications at the local clustered system can be limited by the performance of the back-end
storage controllers at the remote site. This consideration is especially important for Metro
Mirror replication.
To ensure that the back-end storage is able to support the data replication workload, you can
dedicate back-end storage systems to only Global Mirror volumes. You can also configure the
back-end storage to ensure sufficient quality of service (QoS) for the disks that are used by
Global Mirror. Alternatively, you can ensure that physical disks are not shared between data
replication volumes and other I/O.
For more detailed information about SAN boot, see Appendix B, “CLI setup” on page 769.
Because multiple data migration methods are available, choose the method that best fits your
environment, operating system platform, type of data, and the application’s service level
agreement (SLA).
You might want to use the SAN Volume Controller as a data mover to migrate data from a
non-virtualized storage subsystem to another non-virtualized storage subsystem. In this
case, you might have to add checks that relate to the specific storage subsystem that you
want to migrate.
Be careful when you are using slower disk subsystems for the secondary volumes for
high-performance primary volumes because the SAN Volume Controller cache might not
be able to buffer all the writes. Flushing cache writes to slower back-end storage might
impact performance of your hosts.
For more information, see Chapter 13, “RAS, monitoring, and troubleshooting” on page 689.
This application currently only supports upgrades from 2145-CF8, 2145-CG8, and 2145-DH8
nodes to SV1 nodes. For more information, see:
https://ports.eu-gb.mybluemix.net/
Tip: Technically, almost all storage controllers provide both striping (in the form of RAID 5,
RAID 6, or RAID 10) and a form of caching. The real benefit of SAN Volume Controller is
the degree to which you can stripe the data across disks in a storage pool, even if they are
installed in different back-end storage systems. This technique maximizes the number of
active disks available to service I/O requests. The SAN Volume Controller provides
additional caching, but its impact is secondary for sustained workloads.
To ensure the performance that you want and verify the capacity of your storage
infrastructure, undertake a performance and capacity analysis to reveal the business
requirements of your storage environment. Use the analysis results and the guidelines in this
chapter to design a solution that meets the business requirements of your organization.
When considering performance for a system, always identify the bottleneck and, therefore,
the limiting factor of a specific system. This is a multidimensional analysis that needs to be
performed for each of your workload patterns. There can be different bottleneck components
for different workloads.
When you are designing a storage infrastructure with the SAN Volume Controller or
implementing a SAN Volume Controller in an existing storage infrastructure, you must ensure
that the performance and capacity of the SAN, back-end disk subsystems and SAN Volume
Controller meets the requirements for the set of known or expected workloads.
3.19.1 SAN
The following SAN Volume Controller models are supported for software V8.1:
2145-DH8
2145-SV1
All of these models can connect to 8 Gbps and 16 Gbps switches (2 Gbps and 4 Gbps are no
longer supported). Correct zoning on the SAN switch provides both security and
performance. Implement a dual HBA approach at the host to access the SAN Volume
Controller.
The SAN Volume Controller is designed to handle many paths to the back-end storage.
In most cases, the SAN Volume Controller can improve performance, especially of mid-sized
to low-end disk subsystems, older disk subsystems with slow controllers, or uncached disk
systems, for the following reasons:
The SAN Volume Controller can stripe across disk arrays, and it can stripe across the
entire set of configured physical disk resources.
The SAN Volume Controller 2145-DH8 has 32 GB of cache (64 GB of cache with a second
CPU used for hardware-assisted compression acceleration for IBM Real-time
Compression (RtC) workloads). The SAN Volume Controller 2145-SV1 has at least 64 GB
(up to 256 GB) of cache.
The SAN Volume Controller can provide automated performance optimization of hot spots
by using flash drives and Easy Tier.
The SAN Volume Controller large cache and advanced cache management algorithms also
allow it to improve the performance of many types of underlying disk technologies. The SAN
Volume Controller capability to destage writes asynchronously while maintaining full data
integrity can be important in achieving good database performance.
Because hits to the cache can occur both in the upper (SAN Volume Controller) and the lower
(back-end storage disk controller) level of the overall system, the system as a whole can use
the larger amount of cache wherever it is located. Therefore, SAN Volume Controller cache
provides additional performance benefits for back-end storage systems with extensive cache
banks.
Also, regardless of their relative capacities, both levels of cache tend to play an important role
in enabling sequentially organized data to flow smoothly through the system.
However, SAN Volume Controller cannot increase the throughput potential of the underlying
disks in all cases. Performance benefits depend on the underlying storage technology and the
workload characteristics, including the degree to which the workload exhibits hotspots or
sensitivity to cache size or cache algorithms.
Assuming that no bottlenecks exist in the SAN or on the disk subsystem, you must follow
specific guidelines when you perform the following tasks:
Creating a storage pool
Creating volumes
Connecting to or configuring hosts that use storage presented by a SAN Volume
Controller clustered system
For more information about performance and preferred practices for the SAN Volume
Controller, see IBM System Storage SAN Volume Controller and Storwize V7000 Best
Practices and Performance Guidelines, SG24-7521.
Although the technology is easy to implement and manage, it is helpful to understand the
basics of internal processes and I/O workflow to ensure a successful implementation of any
storage solution.
The following are some general suggestions:
Best results can be achieved if the data compression ratio stays at 25% or above. Volumes
can be scanned with the built-in Comprestimator utility to help decide whether RtC is a
good choice for a specific volume (see the CLI sketch after this list).
More concurrency within the workload gives a better result than single-threaded
sequential I/O streams.
I/O is de-staged to RACE from the upper cache in 64 KiB pieces. The best results are
achieved if the host I/O size does not exceed this size.
Volumes that are used for only one purpose usually have the same work patterns. Mixing
database, virtualization, and general-purpose data within the same volume might make
the workload inconsistent. These workloads might have no stable I/O size and no specific
work pattern, and a below-average compression ratio, making these volumes hard to
investigate during performance degradation. Real-time Compression development
advises against mixing data types within the same volume whenever possible.
Avoid recompressing data that is already compressed. Volumes that contain
pre-compressed data should remain uncompressed volumes.
Volumes with encrypted data have a very low compression ratio and are not good
candidates for compression. This observation is true for data encrypted by the host.
Real-time Compression might provide satisfactory results for volumes encrypted by SAN
Volume Controller because compression is performed before encryption.
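As referenced in the first item of this list, the integrated Comprestimator can also be driven
from the CLI. A minimal sketch, assuming code level V7.6 or later and a hypothetical volume name:

svctask analyzevdisk app_vol01
svcinfo lsvdiskanalysis app_vol01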
For more information about using IBM Spectrum Control to monitor your storage subsystem,
see this website:
http://www.ibm.com/systems/storage/spectrum/control/
Also, see IBM Spectrum Family: IBM Spectrum Control Standard Edition, SG24-8321.
Chapter 4. Initial configuration
Additional features such as user authentication, secure communications, and local port
masking are also covered. These features are optional and do not need to be configured
during the initial configuration.
4.2 System initialization
This section provides step-by-step instructions on how to create the SVC cluster. The
procedure is performed by using the technician port for 2145-SV1 and 2145-DH8 models.
Attention: Do not repeat the instructions for system initialization on more than one node.
After system initialization completes, use the management GUI to add more nodes to the
system. See 4.3.2, “Adding nodes” on page 115 for information about how to perform this
task.
During system initialization, you must specify either an IPv4 or an IPv6 system address. This
address is given to Ethernet port 1. After system initialization, you can specify additional IP
addresses for port 1 and port 2 until both ports have an IPv4 address and an IPv6 address.
Choose any 2145-SV1 or 2145-DH8 node that you want to be a member of the cluster being
created, and connect a personal computer (PC) or notebook to the technician port on the rear
of the node.
Figure 4-1 shows the location of the technician port on the 2145-SV1 model.
Figure 4-2 shows the location of the technician port on the 2145-DH8 model.
The technician port provides an IPv4 address through DHCP, so ensure that the Ethernet port
of your PC or notebook is configured for DHCP if you want the IP address to be assigned
automatically. If DHCP is not available, set a static IP address of 192.168.0.2 on the
Ethernet port.
Note: The SVC does not provide IPv6 IP addresses for the technician port.
Note: During the system initialization, you are prompted to accept untrusted certificates
because the system certificates are self-signed. You can accept these because they
are not harmful.
2. The welcome dialog box opens, as shown in Figure 4-3. Click Next to start the procedure.
3. Select the first option, As the first node in a new system, as shown in Figure 4-4. Click
Next.
Figure 4-4 System initialization: Configuring the first node in a new system
4. Enter the IP address details for the new system. You can choose between an IPv4 or IPv6
address. In this example an IPv4 address is set, as shown in Figure 4-5. Click Next.
6. After the system initialization is complete, follow the instructions shown in Figure 4-7:
a. Disconnect the Ethernet cable from the technician port and from your PC or notebook.
b. Connect the PC or notebook to the same network as the system.
c. Click Finish to be redirected to the management GUI to complete the system setup.
Note: You can access the management GUI from any management console that is
connected to the same network as the system. Enter the system IP address on a
supported browser to access the management GUI.
4.3 System setup
This section provides step-by-step instructions on how to define the basic settings of the
system with the system setup wizard, and on how to add additional nodes and optional
expansion enclosures.
Note: The first time that you connect to the management GUI, you are prompted to accept
untrusted certificates because the system certificates are self-signed. You can accept
these certificates because they are not harmful.
You can install certificates signed by a third-party certificate authority after you complete
system setup. See 4.5, “Configuring secure communications” on page 134 for instructions
on how to perform this task.
Important: The default password for the superuser account is passw0rd (the number
zero and not the letter O).
3. Carefully read the license agreement. Select I agree with the terms in the license
agreement when you are ready, as shown in Figure 4-10. Click Next.
4. Enter a new password for superuser, as shown in Figure 4-11. The password length is 6 -
64 characters and it cannot begin or end with a space. Click Apply and Next.
6. Enter either the number of tebibytes (TiB) or the number of Storage Capacity Units (SCUs)
licensed for each function as authorized by your license agreement. Figure 4-13 shows
some values as an example only.
Note: Encryption uses a different licensing scheme and is activated later in the wizard.
Note: If you choose to manually enter these settings, you cannot select the 24-hour
clock at this time. However, you can select the 24-hour clock after you complete the
wizard by clicking Settings → System and selecting Date and Time.
8. Select whether the encryption feature was purchased for this system. In this example, it is
assumed encryption was not purchased, as shown in Figure 4-15. Click Next.
Note: If you have purchased the encryption feature, you are prompted to activate your
encryption license either manually or automatically. For information about how to
activate your encryption license during the system setup wizard, see Chapter 12,
“Encryption” on page 633.
Note: If your system is not in the US, complete the state or province field with XX.
10.Enter the contact details of the person to be contacted to resolve issues on the system.
You can choose to enter the details for a 24-hour operations desk. Figure 4-17 shows
some details as an example only. Click Apply and Next.
SVC can use SNMP traps, syslog messages, and call home to notify you and IBM Support
when significant events are detected. Any combination of these notification methods can
be used simultaneously. However, only call home is configured during the system setup
wizard. For information about how to configure other notification methods, see Chapter 13,
“RAS, monitoring, and troubleshooting” on page 689.
Note: When call home is configured, the system automatically creates a support
contact with one of the following email addresses, depending on country or region of
installation:
US, Canada, Latin America, and Caribbean Islands: [email protected]
All other countries or regions: [email protected]
If you do not want to configure call home now, it can be done later by navigating to
Settings → Notifications.
12.A summary of all the changes is displayed, as shown in Figure 4-19. Confirm that the
changes are correct and click Finish.
13.The message shown in Figure 4-20 opens, confirming that the setup is complete. Click
Close. You are automatically redirected to the management GUI Dashboard.
During system setup, if there is only one node on the fabric that is not part of the cluster, that
node is added automatically. If there is more than one node, no node is added automatically.
Figure 4-22 shows the System window for a system with two nodes and no other nodes
visible on the fabric.
Figure 4-23 shows the System window for a system with one node and seven other nodes
visible in the fabric.
Figure 4-23 System window: One node and seven nodes to add
If you have purchased only two nodes, all nodes are already part of the cluster. If you have
purchased more than two nodes, you must manually add them to the cluster. See 4.3.2,
“Adding nodes” on page 115 for instructions on how to perform this task.
When all nodes are part of the cluster, you can install the optional expansion enclosures. See
4.3.4, “Adding expansion enclosures” on page 121 for instructions about how to perform this
task. If you have no expansion enclosures to install, system setup is complete.
Completing system setup means that all mandatory steps of the initial configuration have
been completed and you can start configuring your storage. Optionally, you can configure
other features, such as user authentication, secure communications, and local port masking.
Before beginning this process, ensure that the new nodes are correctly installed and cabled to
the existing system. Ensure that the Ethernet and Fibre Channel connectivity is correctly
configured and that the nodes are powered on.
2. Select the nodes that you want to add to each I/O group, as shown in Figure 4-26.
You can turn on the identify LED lights on a node by clicking the icon on the right of the
node, as shown in Figure 4-27.
3. Click Finish and wait for the nodes to be added to the system.
Note: For more information about adding spare nodes to the system, see 13.4.4,
“Updating IBM Spectrum Virtualize with a Hot Spare Node” on page 713.
This procedure is the same whether you are configuring the system for the first time or
expanding it afterward.
Before commencing, ensure that the spare nodes are correctly installed and cabled to the
existing system. Ensure that the Ethernet and Fibre Channel connectivity has been correctly
configured and that the nodes are powered on.
Complete the following steps to add spare nodes to the system:
1. Click Actions and then Add Nodes, as shown in Figure 4-29.
2. Select the nodes that you want to add to the system as hot spares, as shown in
Figure 4-30.
3. Click Finish and wait for the nodes to be added to the system.
The second view window titled Hot Spare displays all spare nodes that are configured for the
system, as shown in Figure 4-32.
4.3.4 Adding expansion enclosures
Before continuing, ensure that the new expansion enclosures are correctly installed, cabled to
the existing system, and powered on. If all prerequisites are fulfilled, the Systems window
displays an empty expansion under the two nodes of the I/O group it is attached to, as shown
in Figure 4-33. The plus sign means that there are expansions that are not yet added to the
system.
3. Review the summary in the next dialog box. Click Finish to add the expansions to the
system. The new expansions are now displayed in the Systems window. Figure 4-35
shows a system with eight nodes and two expansion enclosures installed under I/O
group 0.
4.4 Configuring user authentication
There are two methods of user authentication to control access to the GUI and to the CLI:
Local authentication is performed within the SVC system. Local GUI authentication is
done with user name and password. Local CLI authentication is done either with an SSH
key or a user name and password.
Remote authentication allows users to authenticate to the system using credentials stored
on an external authentication service. This feature means that you can use the passwords
and user groups defined on the remote service to simplify user management and access,
to enforce password policies more efficiently, and to separate user management from
storage management.
Note: Superuser is the only user allowed to log in to the Service Assistant Tool. It is also
the only user allowed to run sainfo and satask commands through the CLI.
Superuser is a member of the SecurityAdmin user group, which is the most privileged role
within the system.
The password for superuser is set by the user during system setup. The superuser password
can be reset to its default value of passw0rd using the technician port.
User names can contain up to 256 printable American Standard Code for Information
Interchange (ASCII) characters. Forbidden characters are the single quotation mark ('), colon
(:), percent symbol (%), asterisk (*), comma (,), and double quotation marks (“). A user name
cannot begin or end with a blank space.
Passwords for local users can be up to 64 printable ASCII characters. There are no forbidden
characters. However, passwords cannot begin or end with blanks.
Key authentication is attempted first with the password as a fallback. The password and the
SSH key are used for CLI or file transfer access. For GUI access, only the password is used.
Note: Local users are created for each SVC system. If you want to allow access for a user
on multiple systems, you must define the user in each system with the same name and the
same privileges.
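For illustration, a local user with password authentication can be created from the CLI as
follows (the user name, user group, and password are hypothetical examples); an SSH public key
can be associated by using the -keyfile parameter instead of, or in addition to, a password:

svctask mkuser -name jsmith -usergrp Administrator -password Examp1ePassw0rd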
Users that are authenticated by an LDAP server can log in to the management GUI and the
CLI. These users do not need to be configured locally for CLI access, nor do they need an
SSH key configured to log in using the CLI.
If multiple LDAP servers are available, you can configure more than one to improve
availability. Authentication requests are processed by those LDAP servers that are marked as
preferred unless the connections fail or a user is not found. Requests are distributed across
all preferred servers for load balancing in a round-robin fashion.
Note: All LDAP servers that are configured within the same system must be of the same
type.
If users that are part of a group on the LDAP server are to be authenticated remotely, a user
group with an identical name must exist on the system. The user group name is case
sensitive. The user group must also be enabled for remote authentication on the system.
A user who is authenticated remotely is granted permissions according to the role that is
assigned to the user group of which the user is a member.
To configure remote authentication using LDAP, start by enabling remote authentication:
1. Click Settings → Security, and select Remote Authentication and then Configure
Remote Authentication, as shown in Figure 4-36.
2. Enter the LDAP settings. Note that these settings are not server specific. They are
common to every server configured. Extra optional settings are available by clicking
Advanced Settings. The following settings are available:
– LDAP type
• IBM Tivoli Directory Server (for IBM Security Directory Server)
• Microsoft Active Directory
• Other (for OpenLDAP)
In this example, we configure an OpenLDAP server, as shown in Figure 4-37 on
page 126.
– Security
Choose between None, SSL, or Transport Layer Security. Using some form of
security ensures that user credentials are encrypted before being transmitted. Select
SSL to use LDAP over SSL (LDAPS) to establish secure connections using port 636
for negotiation and data transfer. Select Transport Layer Security to establish secure
connections using Start TLS, allowing both encrypted and unencrypted connections to
be handled by the same port.
– Service Credentials
This is an advanced and optional setting. Leave Distinguished Name and Password
empty if your LDAP server supports anonymous bind. In this example, we enter the
credentials of an existing user on the LDAP server with permission to query the LDAP
directory. You can enter this information in the format of an email address (for example,
[email protected]) or as a distinguished name (for example,
cn=Administrator,cn=users,dc=ssd,dc=hursley,dc=ibm,dc=com in Figure 4-38).
– User Attribute
This LDAP attribute is used to determine the user name of remote users. The attribute
must exist in your LDAP schema and must be unique for each of your users.
This is an advanced setting that defaults to sAMAccountName for Microsoft Active
Directory and to uid for IBM Security Directory Server and OpenLDAP.
– Group Attribute
This LDAP attribute is used to determine the user group memberships of remote users.
The attribute must contain either the distinguished name of a group or a
colon-separated list of group names.
This is an advanced setting that defaults to memberOf for Microsoft Active Directory and
OpenLDAP and to ibm-allGroups for IBM Security Directory Server. For OpenLDAP
implementations, you might need to configure the memberOf overlay if it is not in place.
– Audit Log Attribute
This LDAP attribute is used to determine the identity of remote users. When an LDAP
user performs an audited action, this identity is recorded in the audit log. This is an
advanced setting that defaults to userPrincipalName for Microsoft Active Directory and
to uid for IBM Security Directory Server and OpenLDAP.
3. Enter the server settings for one or more LDAP servers, as shown in Figure 4-39 on
page 128. To add more servers, click the plus (+) icon. The following settings are available:
– Preferred
Authentication requests are processed by the preferred servers unless the connections
fail or a user is not found. Requests are distributed across all preferred servers for load
balancing. Select Preferred to set the server as a preferred server.
– IP Address
The IP address of the server.
– Base DN
The distinguished name to use as a starting point for searching for users on the server
(for example, dc=ssd,dc=hursley,dc=ibm,dc=com).
– SSL Certificate
The SSL certificate that is used to securely connect to the LDAP server. This certificate
is required only if you chose to use SSL or Transport Layer Security as a security
method earlier.
Click Finish to save the settings.
Now that remote authentication is enabled, the remote user groups must be configured. You
can use the default built-in user groups for remote authentication. However, remember that
the name of the default user groups cannot be changed. If the LDAP server already contains
a group that you want to use, the name of the group must be changed on the server side to
match the default name. Any user group, whether default or self-defined, must be enabled for
remote authentication.
Complete the following steps to create a user group with remote authentication enabled:
1. Click Access → Users and select Create User Group, as shown in Figure 4-40.
2. Enter the details for the new group. Select Enable for this group to enable remote
authentication, as shown in Figure 4-41. Click Create.
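A similar result can be achieved from the CLI with a sketch along these lines (the group name is
hypothetical); the -remote flag enables the group for remote authentication:

svctask mkusergrp -name LDAPAdmins -role Administrator -remote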
2. Select Enable for this group, as shown in Figure 4-43.
When you have at least one user group enabled for remote authentication, make sure that the
LDAP server is configured correctly by verifying that the following conditions are true:
The name of the user group on the LDAP server matches the one you just modified or
created.
Each user that you want to authenticate remotely is a member of the appropriate user
group for the intended system role.
Figure 4-45 shows the result of a successful connection test. If the connection is not
successful, an error is logged in the event log.
There is also the option to test a real user authentication attempt. Click Settings →
Security → Remote Authentication, and select Global Actions and then Test LDAP
Authentication, as shown in Figure 4-46.
Enter the user credentials of a user defined on the LDAP server, as shown in Figure 4-47.
Click Test.
Again, the message CMMVC70751 The LDAP task completed successfully is shown after a
successful test.
Both the connection test and the authentication test must complete successfully to ensure
that LDAP authentication works correctly. Assuming both tests succeed, users can log in to
the GUI and CLI using their network credentials.
A user can log in with their short name (that is, without the domain component) or with the
fully qualified user name in the form of an email address.
The rights of a user who belongs to a specific user group are defined by the role that is
assigned to the user group. It is the role that defines what a user can or cannot do on the
system.
SVC provides six user groups and seven roles by default, as shown in Table 4-2. The
VasaProvider role is not associated with a default user group.
Note: The VasaProvider role is used to allow VMware to interact with the system when
implementing Virtual Volumes. Avoid using this role for users who are not controlled by
VMware.
Table 4-2   Default user groups and roles

User group          Role
SecurityAdmin       SecurityAdmin
Administrator       Administrator
CopyOperator        CopyOperator
Service             Service
Monitor             Monitor
RestrictedAdmin     RestrictedAdmin
-                   VasaProvider
Signed SSL certificates are issued by a third-party certificate authority. A browser maintains a
list of trusted certificate authorities, identified by their root certificate. The root certificate must
be included in this list in order for the signed certificate to be trusted. If it is not, the browser
presents security warnings.
To see the details of your current system certificate, click Settings → Security and select
Secure Communications, as shown in Figure 4-48.
SVC allows you to generate a new self-signed certificate or to configure a signed certificate.
Attention: Before generating a request, ensure that your current browser does not
have restrictions on the type of keys that are used for certificates. Some browsers limit
the use of specific key-types for security and compatibility issues.
3. Save the generated request file. The Secure Communications window now mentions that
there is an outstanding certificate request, as shown in Figure 4-50. This is the case until
the associated signed certificate is installed.
Attention: If you need to update a field in the certificate request, you can generate a
new request. However, do not generate a new request after sending the original one to
the certificate authority. Generating a new request overrides the original one and the
signed certificate associated with the original request cannot be installed.
7. You are prompted to confirm the action, as shown in Figure 4-52. Click Yes to proceed.
The signed certificate is installed.
4.5.2 Generating a self-signed certificate
Complete the following steps to generate a self-signed certificate:
1. Select Update Certificate on the Secure Communications window.
2. Select Self-signed certificate and enter the details for the new certificate. Key type and
validity days are the only mandatory fields. Figure 4-53 shows some values as an
example.
Attention: Before creating a new self-signed certificate, ensure that your current
browser does not have restrictions on the type of keys that are used for certificates.
Some browsers limit the use of specific key-types for security and compatibility issues.
Click Update.
With Fibre Channel port masking, you control the use of Fibre Channel ports. You can control
whether the ports are used to communicate to other nodes within the same local system, and
if they are used to communicate to nodes in partnered systems. Fibre Channel port masking
does not affect host or storage traffic. It applies only to node-to-node communications
within a system and replication between systems.
Note: This section only applies to local port masking. For information about configuring the
partner port mask for intercluster node communications, see 11.6.4, “Remote copy
intercluster communication” on page 550.
The setup of Fibre Channel port masks is useful when you have more than four Fibre
Channel ports on any node in the system because it saves setting up many SAN zones on
your switches. Fibre Channel I/O ports are logical ports, which can exist on Fibre Channel
platform ports or on FCoE platform ports. Using a combination of port masking and fabric
zoning, you can ensure that the number of logins per node is not more than the limit. If a
canister receives more than 16 logins from another node, then it causes node error 860.
The port masks apply to all nodes on a system. A different port mask cannot be set on nodes
in the same system. You do not have to have the same port mask on partnered systems.
Note: The lsfabric command shows all of the paths that are possible in IBM Spectrum
Virtualize (as defined by zoning) independent of their usage. Therefore, the command
output includes paths that will not be used because of port masking.
A port mask is a string of zeros and ones. The last digit in the string represents port one. The
previous digits represent ports two, three, and so on. If the digit for a port is “1”, the port is
enabled and the system attempts to send and receive traffic on that port. If it is “0”, the system
does not send or receive traffic on the port. If there are not sufficient digits in the string to
specifically set a port number, that port is disabled for traffic.
For example, if the local port mask is set to 101101 on a node with eight Fibre Channel ports,
ports 1, 3, 4 and 6 are able to connect to other nodes in the system. Ports 2, 5, 7, and 8 do
not have connections. On a node in the system with only four Fibre Channel ports, ports 1, 3,
and 4 are able to connect to other nodes in the system.
The Fibre Channel ports for the system can be viewed by navigating to Settings → Network
and opening the Fibre Channel Ports menu, as shown in Figure 4-55. Port numbers refer to
the Fibre Channel I/O port IDs.
When replacing or upgrading your node hardware to newer models, consider that the number
of Fibre Channel ports and their arrangement might have changed. Take this possible change
into consideration and ensure that any configured port masks are still valid for the new
configuration.
4.6.2 Setting the local port mask
Take the example fabric port configuration shown in Figure 4-57.
The positions in the mask represent the Fibre Channel I/O port IDs with ID 1 in the rightmost
position. In this example, ports A1, A2, A3, A4, B1, B2, B3, and B4 correspond to FC I/O port
IDs 1, 2, 3, 4, 5, 6, 7 and 8.
To set the local port mask, use the chsystem command. For local node-to-node
communication, limit traffic to ports A1, A2, A3, and A4 by applying a port mask of
00001111 on both systems, as shown in Example 4-1.
Example 4-1 Setting a local port mask using the chsystem command
IBM_Storwize:ITSO:superuser>chsystem -localfcportmask 00001111
IBM_Storwize:ITSO:superuser>
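To verify the result, display the system properties and check the configured masks. As a sketch
(the field names below are as we recall them and might differ slightly by code level, so confirm
them in your output):

IBM_Storwize:ITSO:superuser>lssystem

Look for the local_fc_port_mask and partner_fc_port_mask fields in the command output.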
Chapter 5. Graphical user interface
This chapter explains the basic view and the configuration procedures that are required to get
your IBM SAN Volume Controller environment running as quickly as possible by using the GUI.
This chapter does not describe advanced troubleshooting or problem determination and
some of the complex operations (compression, encryption) because they are explained later
in this book.
Throughout the chapter, all GUI menu items are introduced in a systematic, logical order as
they appear in the GUI. However, topics that are described more in detail in other chapters of
the book are not covered in depth and are only referred to here. For example, Pools, Volumes,
Hosts, and Copy Services are described in dedicated chapters that include their associated
GUI operations.
Demonstration: The IBM Client Demonstration Center has a demo of the V8.1 GUI here:
https://www.ibm.com/systems/clientcenterdemonstrations/faces/dcDemoView.jsp?demoId=2641
For illustration, the examples configure the IBM SAN Volume Controller (SVC) cluster in a
standard topology.
Multiple users can be logged in to the GUI at any time. However, no locking mechanism
exists, so be aware that if two users change the same object at the same time, the last action
that is entered from the GUI is the action that takes effect.
IBM Spectrum Virtualize V8.1 introduced a major change in the GUI design to be aligned with
the unified look and visual style of other IBM products. Also, some specific features and
options to manage SVC have been added, and others have been limited in the attributes that
they expose. This chapter highlights these additions and limitations as compared to the
previous version, V7.8.
Important: Data entries that are made through the GUI are case-sensitive.
You must enable JavaScript in your browser. For Mozilla Firefox, JavaScript is enabled by
default and requires no additional configuration. For more information about configuring
your web browser, go to this website:
https://ibm.biz/BdjKmU
Each user should have their own unique account. The default user accounts should be
disabled, or their passwords changed and kept secure for emergency use only. This approach
helps to identify the personnel working on the system and to track all important changes that
they make. The superuser account should be used for the initial configuration only.
After a successful login, the V8.1 welcome window shows up with the new system dashboard
(Figure 5-2).
System Health indicates the current status of all critical system components grouped in
three categories: Hardware, logical, and connectivity components. From each group, you
can navigate directly to the section of the GUI where the affected component is managed
(Figure 5-5).
In V8.1, the Dashboard appears as a welcome page instead of the system pane that was shown
in previous versions. The system overview has been relocated to the menu Monitoring →
System. Although the Dashboard pane provides key information about system behavior, the
System menu is the preferred starting point for obtaining details about your SVC
components. This approach is followed in the next sections of this chapter.
5.2 Introduction to the GUI
As shown in Figure 5-6, the former IBM SAN Volume Controller GUI System pane has been
relocated to Monitoring → System.
In this case, the GUI warns you that no host is defined yet. You can directly perform the task
from this window or cancel it and run the procedure later at any convenient time. Other
suggested tasks that typically appear after the initial system configuration are to create a
volume and configure a storage pool.
The dynamic IBM Spectrum Virtualize menu contains the following panes:
Dashboard
Monitoring
Pools
Volumes
Hosts
Copy Services
Access
Settings
Alerts indication
The left icon in the notification area informs administrators about important alerts in the
systems. Click the icon to list warning messages in yellow and errors in red (Figure 5-10).
You can navigate directly to the events menu by clicking View All Events option or see each
event message separately by clicking the Details icon of the specific message, analyze the
content, and eventually run suggested fix procedures (Figure 5-11).
In our case, shown in Figure 5-12, we have not yet defined any hosts attached to the system.
Therefore, the system suggests that we do so and offers us direct access to the associated
host menu. Click Run Task to define the host according to the procedure explained in
Chapter 8, “Hosts” on page 337. If you do not want to define any host at the moment, click
Not Now and the suggestion message disappears.
Similarly, you can analyze the details of running tasks, either all of them together in one
window or of a single task. Click View to open the volume format job as shown in Figure 5-13.
Help
To access online help, click the question mark icon in the left of the notification area and
select the context-based help topic, as shown in Figure 5-14. The help window displays the
context item for the pane that you are working on.
For example, on the System pane, you have the option to open help related to the system in
general as shown in Figure 5-15.
The Help Contents option redirects you to the SVC IBM Knowledge Center. However, it
requires internet access from the workstation where the management GUI is started.
The following content of the chapter helps you to understand the structure of the pane and
how to navigate to various system components to manage them more efficiently and quickly.
Table filtering
On most pages, a Filter option (magnifying glass icon) is available on the upper-left side of the
window. Use this option if the list of object entries is too long.
Complete the following steps to use search filtering:
1. Click Filter on the upper-left side of the window, as shown in Figure 5-17, to open the
search box.
2. Enter the text string that you want to filter and press Enter.
3. By using this function, you can filter the table based on column values. In our example,
the volume list displays only the volumes whose names include DS somewhere in the
name. DS is highlighted in amber, as shown in Figure 5-18. The search option is not
case-sensitive.
4. Remove this filtered view by clicking the Reset Filter icon, as shown in Figure 5-19.
Filtering: This filtering option is available in most menu options of the GUI.
For example, on the Volumes pane, complete the following steps to add a column to the table:
1. Right-click any column headers of the table or select the icon in the left corner of the table
header. A list of all of the available columns is displayed, as shown in Figure 5-20.
2. Select the column that you want to add (or remove) from this table. In our example, we
added the volume ID column and sorted the content by ID, as shown on the left in
Figure 5-21.
3. You can repeat this process several times to create custom tables to meet your
requirements.
4. You can always return to the default table view by selecting Restore Default View in the
column selection menu, as shown in Figure 5-22.
Sorting: By clicking a column, you can sort a table based on that column in ascending or
descending order.
The following section describes each option on the Monitoring menu (Figure 5-24).
When you click a specific component of a node, a pop-up window indicates the details of the
component. By right-clicking and selecting Properties, you see detailed technical attributes,
such as CPU, memory, serial number, node name, encryption status, and node status (online
or offline) as shown in Figure 5-26.
In an environment with multiple IBM SAN Volume Controller clusters, you can easily direct the
onsite personnel or technician to the correct device by enabling the identification LED on the
front panel. Click Identify in the window that is shown in Figure 5-27.
Wait for confirmation from the technician that the device in the data center was correctly
identified.
Alternatively, you can use the SVC command-line interface (CLI) to get the same results.
Type the following commands in this sequence:
1. Type svctask chnode -identify yes 1 (or just type chnode -identify yes 1).
2. Type svctask chnode -identify no 1 (or just type chnode -identify no 1).
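If you are not sure which node ID or name to use, you can list the nodes first with the lsnode command, for example:
svcinfo lsnode -delim :
The first two columns of the output show the node ID and name that you pass to the chnode -identify command.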
Each system that is shown in the System view pane can be rotated by 180° to see its rear
side. Click the rotation arrow in the lower-right corner of the device, as illustrated in
Figure 5-29.
5.4.2 Events
The Events option, available from the Monitoring menu, tracks all informational, warning,
and error messages that occur in the system. You can apply various filters to sort them, or
export them to an external comma-separated values (CSV) file. Figure 5-30 provides an
example of records in the system event log.
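The same event log can also be reviewed from the CLI, which can be convenient for scripted health checks. The following command is a standard IBM Spectrum Virtualize CLI call; any post-processing of its output is left to your own tooling:
svcinfo lseventlog
Each entry includes fields such as the sequence number, object name, status, and description that correspond to the rows displayed in the GUI.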
For the error messages with the highest internal priority, perform corrective actions by running
fix procedures. Click the Run Fix button as shown in Figure 5-30 on page 160. The fix
procedure wizard opens as indicated in Figure 5-31.
The wizard guides you through the troubleshooting and fixing process from either a hardware
or a software perspective. If you determine that the problem cannot be fixed without a
technician’s intervention, you can cancel the procedure at any time. Details about
fix procedures are discussed in Chapter 13, “RAS, monitoring, and troubleshooting” on
page 689.
5.4.3 Performance
The Performance pane reports the general system statistics that relate to processor (CPU)
utilization, host and internal interfaces, volumes, and MDisks. You can switch between MBps
or IOPS, or even drill down in the statistics to the node level. This capability might be useful
when you compare the performance of each node in the system if problems exist after a node
failover occurs. See Figure 5-32.
The charts that are shown in Figure 5-33 represent 5 minutes of the data stream. For in-depth
storage monitoring and performance statistics with historical data about your SVC system,
use IBM Spectrum Control (formerly IBM Tivoli Storage Productivity Center for Disk, and also
available as part of IBM Virtual Storage Center).
You can switch between each type (group) of operation, but you cannot show them all in one
list (Figure 5-35).
5.5 Pools
The Pools menu option is used to configure and manage storage pools, internal and external
storage, and MDisks, and to migrate existing attached storage to the system.
The Pools menu contains the following items, accessible from the GUI (Figure 5-36):
Pools
Volumes by Pool
Internal Storage
External Storage
MDisks by Pool
System Migration
The details about storage pool configuration and management are provided in Chapter 6,
“Storage pools” on page 197.
5.6 Volumes
A volume is a logical disk that the system presents to attached hosts. Using GUI operations,
you can create different types of volumes, depending on the type of topology that is
configured on your system.
The Volumes menu contains the following items (Figure 5-37 on page 164):
Volumes
Volumes by Pool
Volumes by Host
Cloud Volumes
The details about all those tasks and guidance through the configuration and management
process are provided in Chapter 7, “Volumes” on page 251.
5.7 Hosts
A host system is a computer that is connected to the system through either a Fibre Channel
interface or an IP network. It is a logical object that represents a list of worldwide port names
(WWPNs) that identify the interfaces that the host uses to communicate with the SVC. Both
Fibre Channel and SAS connections use WWPNs to identify the host interfaces to the
systems.
Additional detailed information about configuration and management of hosts using the GUI is
available in Chapter 8, “Hosts” on page 337.
5.8 Copy Services
The IBM Spectrum Virtualize copy services and volumes copy operations are based on the
IBM FlashCopy function. In its basic mode, the function creates copies of content on a source
volume to a target volume. Any data that existed on the target volume is lost and is replaced
by the copied data.
More advanced functions allow FlashCopy operations to occur on multiple source and target
volumes. Management operations are coordinated to provide a common, single point-in-time
for copying target volumes from their respective source volumes. This technique creates a
consistent copy of data that spans multiple volumes.
The IBM SAN Volume Controller Copy Services menu offers the following operations in the
GUI (Figure 5-39):
FlashCopy
Consistency Groups
FlashCopy Mappings
Remote Copy
Partnerships
Because Copy Services is one of the most important features for resiliency solutions,
study the additional technical details in Chapter 11, “Advanced Copy Services” on page 461.
5.9 Access
The Access menu in the GUI controls who can log in to the system, defines the access
rights for each user, and tracks what each privileged user has done to the system. It is
logically split into two categories:
Users
Audit Log
This section explains how to create, modify, or remove users, and how to view records in the
audit log.
5.9.1 Users
You can create local users who can access the system. These user types are defined based
on the administrative privileges that they have on the system.
Local users must provide either a password, a Secure Shell (SSH) key, or both. Local users
are authenticated through the authentication methods that are configured on the system. If
the local user needs access to the management GUI, a password is needed for the user. If
the user requires access to the CLI through SSH, either a password or a valid SSH key file is
necessary. Local users must be part of a user group that is defined on the system. User
groups define roles that authorize the users within that group to a specific set of operations on
the system.
To define your User Group in the IBM SAN Volume Controller, click Access → Users as
shown in Figure 5-41.
The following user group roles exist in IBM Spectrum Virtualize:
Security Administrator can manage all functions of the systems except tasks associated
with the commands satask and sainfo.
Administrator has full rights in the system except those commands related to user
management and authentication.
Restricted Administrator has the same rights as Administrators except removing
volumes, host mappings, hosts, or pools. This is the ideal option for support personnel.
Copy Operators can start, stop, or pause any FlashCopy-based operations.
Monitor users have access to all viewing operations. They cannot change any value or
parameters of the system.
Service users can set the time and date on the system, delete dump files, add and delete
nodes, apply service, and shut down the system. They have access to all views.
VASA Provider users can manage VMware vSphere Virtual Volumes (VVOLs).
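If you prefer the CLI, a local user can be created with the mkuser command and removed with rmuser. The user name ITSO_admin, the password, and the Monitor user group in this sketch are example values only; substitute your own:
svctask mkuser -name ITSO_admin -usergrp Monitor -password Passw0rd1
svctask rmuser ITSO_admin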
Deleting a user
To remove a user account, select the user in the same menu, click Actions, and select
Delete (Figure 5-43).
An example of the audit log is shown in Figure 5-45.
Important: Failed commands are not recorded in the audit log. Commands triggered by
IBM Support personnel are recorded with the flag Challenge because they use
challenge-response authentication.
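The audit log can also be retrieved from the CLI, for example to archive it externally. The -first parameter simply limits the output to the most recent entries:
svcinfo catauditlog -first 20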
5.10 Settings
Use the Settings pane to configure system options for notifications, security, IP addresses,
and preferences that are related to display options in the management GUI (Figure 5-46).
The following options are available for configuration from the Settings menu:
Notifications: The system can use Simple Network Management Protocol (SNMP) traps,
syslog messages, and Call Home emails to notify you and the support center when
significant events are detected. Any combination of these notification methods can be
used simultaneously.
Notifications are normally sent immediately after an event is raised. However, events can
occur because of service actions that are performed. If a recommended service action is
active, notifications about these events are sent only if the events are still unfixed when the
service action completes.
Email notifications
The Call Home feature transmits operational and event-related data to you and IBM through a
Simple Mail Transfer Protocol (SMTP) server connection in the form of an event notification
email. When configured, this function alerts IBM service personnel about hardware failures
and potentially serious configuration or environmental issues.
2. The Email settings appear as shown in Figure 5-48.
3. This view provides the following useful information about email notification and Call Home
settings, among other details:
– The IP address of the email server (SMTP server) and its port
– The Call Home email address
– The email addresses of one or more users who are set to receive email notifications
– The contact information of the person in the organization who is responsible for the system
– The system location
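The same Call Home and email notification settings can be defined from the CLI. The server address, port, and recipient address below are placeholders only, not recommendations:
svctask mkemailserver -ip 10.10.10.10 -port 25
svctask mkemailuser -address [email protected] -usertype support -error on -warning off
svctask startemail
The startemail command activates the email notification function after the server and users are defined.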
To view the SNMP configuration, use the System window. Move the mouse pointer over
Settings and click Notification → SNMP (Figure 5-49).
From this window (Figure 5-49), you can view and configure an SNMP server to receive
various informational, error, or warning notifications by setting the following information:
IP Address
The address for the SNMP server.
Server Port
The remote port number for the SNMP server. The remote port number must be a value of
1 - 65535.
Community
The SNMP community is the name of the group to which devices and management
stations that run SNMP belong.
Event Notifications
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine any corrective
action.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
To remove an SNMP server, click the Minus sign (-). To add another SNMP server, click
the Plus sign (+).
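An equivalent configuration can be made from the CLI with the mksnmpserver command; the IP address and community string shown are examples:
svctask mksnmpserver -ip 10.10.10.11 -community public -error on -warning on -info off
Use lssnmpserver to verify the definition and rmsnmpserver to delete it.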
Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The IP network can be IPv4 or IPv6. The system can send syslog
messages that notify personnel about an event. You can use the Syslog pane to view the
syslog messages that are sent by the SVC. To view the syslog configuration, move the
mouse pointer over Settings and click Notification → Syslog
(Figure 5-50).
From this window, you can view and configure a syslog server to receive log messages from
various systems and store them in a central repository by entering the following information:
IP Address
The IP address for the syslog server.
Facility
The facility determines the format for the syslog messages. The facility can be used to
determine the source of the message.
Message Format
The message format depends on the facility. The system can transmit syslog messages in
the following formats:
– The concise message format provides standard detail about the event.
– The expanded format provides more details about the event.
Event Notifications
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine whether any
corrective action is necessary.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
The syslog messages can be sent in concise message format or expanded message format.
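A syslog server can also be defined from the CLI. The IP address and facility value below are placeholders; choose the facility that matches your syslog infrastructure:
svctask mksyslogserver -ip 10.10.10.12 -facility 0 -error on -warning on -info off
Use lssyslogserver to review the configured servers.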
5.10.2 Network
This section describes how to view the network properties of the IBM SAN Volume Controller
system. The network information can be obtained by clicking Settings → Network, as shown in Figure 5-51.
Management IP addresses
To view the management IP addresses of IBM Spectrum Virtualize, move your mouse cursor
over Settings → Network and click Management IP Addresses. The GUI shows the
management IP addresses when you move the mouse cursor over the network ports, as shown
in Figure 5-52.
Service IP information
To view the Service IP information of your IBM Spectrum Virtualize, move your mouse cursor
over Settings → Network as shown in Figure 5-51 on page 174, and click the Service IP
Address option to view the properties as shown in Figure 5-53.
Unlike the management IP address, which connects to the system as a whole, the service IP
address connects directly to each individual node, for example for service operations. You can
select a node from the drop-down list and then click any of the ports that are shown in the GUI.
The service IP address can be configured to support IPv4 or IPv6.
iSCSI information
From the iSCSI pane in the Settings menu, you can display and configure parameters for the
system to connect to iSCSI-attached hosts, as shown in Figure 5-54.
Important: If you change the name of the system after iSCSI is configured, you might
need to reconfigure the iSCSI hosts.
To change the system name, click the system name and specify the new name.
System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The name can be 1 - 63 characters.
You can also enable Challenge Handshake Authentication Protocol (CHAP) to
authenticate the system and iSCSI-attached hosts with the specified shared secret.
The CHAP secret is the authentication method that is used to restrict access for other
iSCSI hosts that use the same connection. You can set the CHAP secret for the whole system
under the system properties or for each host definition. The CHAP secret must be identical on
the server and on the system or host definition. You can create an iSCSI host definition without
the use of a CHAP secret.
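As a sketch of the CLI equivalent, the system-wide and per-host CHAP secrets can be set as follows; the secret values and the host name ITSO_HOST are examples only:
svctask chsystem -iscsiauthmethod chap -chapsecret mysystemsecret
svctask chhost -chapsecret myhostsecret ITSO_HOST
The secrets must match what is configured in the iSCSI initiator on the host side.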
3. From this pane, you can modify the following information:
– Time zone
Select a time zone for your system by using the drop-down list.
– Date and time
The following options are available:
• If you are not using a Network Time Protocol (NTP) server, select Set Date and
Time, and then manually enter the date and time for your system, as shown in
Figure 5-58. You can also click Use Browser Settings to automatically adjust the
date and time of your SVC system with your local workstation date and time.
• If you are using an NTP server, select Set NTP Server IP Address and then enter
the IP address of the NTP server, as shown in Figure 5-59.
4. Click Save.
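The date and time settings can also be applied from the CLI. The NTP address below is a placeholder, and the time zone ID is only an example; lstimezones lists the available IDs:
svcinfo lstimezones
svctask settimezone -timezone 520
svctask chsystem -ntpip 10.10.10.13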
Licensing
The system supports both differential and capacity-based licensing. For virtualization and
compression functions, differential licensing charges different rates for different types of
storage, which provides cost effective management of capacity across multiple tiers of
storage. Licensing for these functions is based on the number of Storage Capacity Units
(SCUs) purchased. With other functions, such as remote mirroring and FlashCopy, the license
grants a specific number of terabytes for that function.
3. In the Licensed Functions pane, you can set the licensing options for the SVC for the
following elements (limits are in TiB):
– External Virtualization
Enter the number of SCU units that are associated to External Virtualization for your
IBM SAN Volume Controller environment.
– FlashCopy Limit
Enter the capacity that is available for FlashCopy mappings.
Important: The Used capacity for FlashCopy mapping is the sum of all of the
volumes that are the source volumes of a FlashCopy mapping.
Important: The Used capacity for Global Mirror and Metro Mirror is the sum of the
capacities of all of the volumes that are in a Metro Mirror or Global Mirror
relationship. Both master volumes and auxiliary volumes are included.
During system setup, you can activate the license using the authorization code. The
authorization code is sent with the licensed function authorization documents that you
receive after purchasing the license.
Encryption is activated on a per system basis and an active license is required for each
node that uses encryption. During system setup, the system detects the nodes that
support encryption and a license should be applied to each. If additional nodes are
added and require encryption, additional encryption licenses need to be purchased
and activated.
Update System
The update procedure is described in detail in Chapter 13, “RAS, monitoring, and
troubleshooting” on page 689.
VVOL management is enabled in SVC in the System section, as shown in Figure 5-61. The
NTP server must be configured before enabling VVOLs management. It is strongly advised to
use the same NTP server for ESXi and for SVC.
Restriction: You cannot enable VVOLs support until the NTP server is configured in SVC.
For a quick-start guide to VVOLs, see Quick-start Guide to Configuring VMware Virtual
Volumes for Systems Powered by IBM Spectrum Virtualize, REDP-5321.
In addition, see Configuring VMware Virtual Volumes for Systems Powered by IBM Spectrum
Virtualize, SG24-8328.
Resources
Use this option to change memory limits for Copy Services and RAID functions per I/O group.
Copy Services features and RAID require that small amounts of volume cache be converted
from cache memory into bitmap memory to allow the functions to operate. If you do not have
enough bitmap space allocated when you try to use one of the functions, you will not be able
to complete the configuration.
Table 5-1 provides an example of the amount of memory that is required for remote mirroring
functions, FlashCopy functions, and volume mirroring.
For example, for Remote Copy, which uses a grain size of 256 KiB, 1 MiB of bitmap memory provides
2 TiB of total Metro Mirror, Global Mirror, or HyperSwap volume capacity for the I/O group.
IP Quorum
Starting with IBM Spectrum Virtualize V7.6, a new feature was introduced for enhanced
stretched systems: the IP Quorum application. When an IP-based quorum application is used as
the quorum device for the third site, no Fibre Channel connectivity to that site is required. The
quorum application is a Java application that runs on a host at the third site.
To start with IP Quorum, complete the following steps:
1. If your IBM SAN Volume Controller is configured with IP addresses version 4, click
Download IPv4 Application, or select Download IPv6 Application for systems running
with IP version 6. In our example, IPv4 is the option as shown in Figure 5-63.
2. Click Download IPv4 Application and IBM Spectrum Virtualize generates an IP Quorum
Java application as shown in Figure 5-64. The application can be saved and installed in a
host that is to run the IP quorum application.
3. On the host, you must use the Java command line to initialize the IP quorum application.
Change to the folder where the application is located and run java -jar ip_quorum.jar.
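After the application is running at the third site, you can verify that it has been accepted as a quorum device from the CLI:
svcinfo lsquorum
The output should list the IP quorum application, in addition to any quorum disks, with its current status.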
I/O Groups
For ports within an I/O group, you can enable virtualization of Fibre Channel ports that are
used for host I/O operations. With N_Port ID virtualization (NPIV), the Fibre Channel port
consists of both a physical port and a virtual port. When port virtualization is enabled, ports
do not come up until they are ready to handle I/O, which improves host behavior around node
unpends. In addition, path failures due to an offline node are masked from hosts.
The target port mode on the I/O group indicates the current state of port virtualization:
Enabled: The I/O group contains virtual ports that are available to use.
Disabled: The I/O group does not contain any virtualized ports.
The port virtualization settings of I/O groups are available by clicking Settings → System →
I/O Groups, as shown in Figure 5-65.
You can change the status of the port by right-clicking the wanted I/O group and selecting
Change Target Port as indicated in Figure 5-66.
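The same change can be made from the CLI with the chiogrp command. As a precaution, NPIV is normally moved through the transitional state before it is fully enabled; I/O group 0 here is an example:
svctask chiogrp -fctargetportmode transitional 0
svctask chiogrp -fctargetportmode enabled 0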
To view and configure DNS server information in IBM Spectrum Virtualize, complete the
following steps:
1. In the left pane, click the DNS icon and enter the IP address and the name of each DNS
server. IBM Spectrum Virtualize supports up to two DNS servers, with IPv4 or IPv6 addresses. See
Figure 5-67.
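A DNS server can also be defined from the CLI; the name and address below are examples:
svctask mkdnsserver -name ITSO_dns1 -ip 10.10.10.14
Use lsdnsserver to list the configured servers and rmdnsserver to remove one.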
With transparent cloud tiering, administrators can move older data to cloud storage to free up
capacity on the system. Point-in-time snapshots of data can be created on the system and
then copied and stored on the cloud storage. An external cloud service provider manages the
cloud storage, which reduces storage costs for the system. Before data can be copied to
cloud storage, a connection to the cloud service provider must be created from the system.
A cloud account is an object on the system that represents a connection to a cloud service
provider by using a particular set of credentials. These credentials differ depending on the
type of cloud service provider that is being specified. Most cloud service providers require the
host name of the cloud service provider and an associated password, and some cloud
service providers also require certificates to authenticate users of the cloud storage.
Public clouds use certificates that are signed by well-known certificate authorities. Private
cloud service providers can use either a self-signed certificate or a certificate that is signed by a
trusted certificate authority. These credentials are defined on the cloud service provider and
passed to the system through the administrators of the cloud service provider. A cloud
account defines whether the system can successfully communicate and authenticate with the
cloud service provider by using the account credentials.
If the system is authenticated, it can then access cloud storage to either copy data to the
cloud storage or restore data that is copied to cloud storage back to the system. The system
supports one cloud account to a single cloud service provider. Migration between providers is
not supported.
Each cloud service provider requires different configuration options. The system supports the
following cloud service providers:
IBM Bluemix® (also known as SoftLayer Object Storage)
OpenStack Swift
Amazon S3
To view your IBM Spectrum Virtualize cloud provider settings, from the SVC Settings pane,
move the pointer over Settings and click System, then select Transparent Cloud Tiering,
as shown in Figure 5-68.
Using this view, you can enable and disable features of your Transparent Cloud Tiering and
update the system information concerning your cloud service provider. This pane allows you
to set a number of options:
Cloud service provider
Object Storage URL
The Tenant or the container information that is associated to your cloud object storage
User name of the cloud object account
API Key
The container prefix or location of your object
Encryption
Bandwidth
For detailed instructions about how to configure and enable Transparent Cloud Tiering, see
11.4, “Implementing Transparent Cloud Tiering” on page 531.
5.10.4 Support menu
Use the Support pane to configure and manage connections and upload support packages to
the support center.
The menus are available under Settings → Support as shown in Figure 5-69.
More details about how the Support menu helps with troubleshooting of your system or how
to make a backup of your systems are provided in 13.7.3, “Remote Support Assistance” on
page 734.
Login Message
IBM Spectrum Virtualize V7.6 and later enables administrators to configure the welcome
banner (login message). This is a text message that appears either in the GUI login window
or at the CLI login prompt.
The welcome message is helpful when you need to give users important information about the
system, such as security warnings or a location description.
To define and enable the welcome message by using the GUI, edit the text area with the
message content and click Save (Figure 5-71).
The result of the previous action is shown in Figure 5-72. The system shows the welcome
message in the GUI before login.
General settings
The General Settings menu allows the user to refresh the GUI cache, to set the low graphics
mode option, and to enable advanced pools settings.
Complete the following steps to view and configure general GUI preferences:
1. From the SVC Settings window, move the pointer over Settings and click GUI
Preferences (Figure 5-74).
When you choose a name for an object, the following rules apply:
Names must begin with a letter.
Important: Do not start names by using an underscore (_) character even though it is
possible. The use of the underscore as the first character of a name is a reserved
naming convention that is used by the system configuration restore process.
Names must not begin or end with a space.
Object names must be unique within the object type. For example, you can have a volume
called ABC and an MDisk called ABC, but you cannot have two volumes that are called
ABC.
The default object name is valid (object prefix with an integer).
Objects can be renamed to their current names.
To rename the system from the System window, complete the following steps:
1. Click Actions in the upper-left corner of the SVC System pane, as shown in Figure 5-75.
2. The Rename System pane opens (Figure 5-76). Specify a new name for the system and
click Rename.
System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The clustered system name can be 1 - 63 characters.
3. Click Yes.
Warning: When you rename your system, the iSCSI qualified name (IQN) automatically changes
because it includes the system name by default. Therefore, this change requires additional
actions on iSCSI-attached hosts.
3. Enter the new name of the node and click Rename (Figure 5-78).
Warning: Changing the SVC node name causes an automatic IQN update and requires
the reconfiguration of all iSCSI-attached hosts.
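For reference, the equivalent CLI commands for renaming the system and a node are shown below; ITSO_SVC and ITSO_SVC_NODE1 are example names only:
svctask chsystem -name ITSO_SVC
svctask chnode -name ITSO_SVC_NODE1 1
The same IQN-related warning applies regardless of whether the GUI or the CLI is used.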
Renaming sites
The SVC supports configuration of site settings that describe the location of the nodes and
storage systems that are deployed in a stretched system configuration. This site information
configuration is only part of the configuration process for enhanced systems. The site
information makes it possible for the SVC to manage and reduce the amount of data that is
transferred between the two sides of the system, which reduces the costs of maintaining the
system.
Three site objects are automatically defined by the SVC and numbered 1, 2, and 3. The SVC
creates the corresponding default names, site1, site2, and site3, for each of the site
objects. site1 and site2 are the two sites that make up the two halves of the enhanced
system, and site3 is the quorum disk. You can rename the sites to describe your data center
locations.
To rename the sites, complete these steps:
1. On the System pane, select Actions in the upper-left corner.
2. The Actions menu opens. Select Rename Sites, as shown in Figure 5-79.
3. The Rename Sites pane with the site information opens, as shown in Figure 5-80.
2. The wizard opens informing you about options to change topology to either Stretched
cluster or HyperSwap (Figure 5-82).
3. The system requires a definition of three sites: Primary, Secondary, and Quorum site.
Assign reasonable names to sites for easy identification as shown in our example
(Figure 5-83).
4. Choose the wanted topology. While Stretched Cluster is optimal for Disaster Recovery
solutions with asynchronous replication of primary volumes, HyperSwap is ideal for high
availability solutions with near-real-time replication. In our case, we decided on a
Stretched System (Figure 5-84).
5. Assign hosts to one of the sites as primary. Right-click each host and modify sites for them
one by one (Figure 5-85). Assign primary sites also to offline hosts as they might be just
down for maintenance or any other reason.
6. Similarly, assign backend storage to sites from where the primary volumes will be
provisioned (that is, where the hosts are primarily located) (Figure 5-86). At least one
storage device must be assigned to the site planned for Quorum volumes.
8. The SVC creates a set of commands based on input from the wizard and eventually
switches the topology to the entered configuration (Figure 5-88).
As a validation step, verify that all hosts have their volumes correctly mapped and online, and
that no errors appear in the event log.
Detailed information about resilient solutions with your IBM SAN Volume Controller
environment is available in IBM Spectrum Virtualize and SAN Volume Controller Enhanced
Stretched Cluster with VMware, SG24-8211, and for HyperSwap in IBM Storwize V7000,
Spectrum Virtualize, HyperSwap, and VMware Implementation, SG24-8317.
Chapter 6. Storage pools
Figure 6-1 provides an overview of how storage pools, MDisks, and volumes are related. This
pane is available by browsing to Monitoring → System and clicking the Overview button on
the upper-right corner of the pane. In the example in Figure 6-1, the system has four LUs from
internal disk arrays, no LUs from external storage, four storage pools, and 93 defined
volumes that are mapped to four hosts.
SVC organizes storage into pools to ease storage management and make it more efficient.
All MDisks in a pool are split into extents of the same size and volumes are created out of the
available extents. The extent size is a property of the storage pool and cannot be changed
after the pool is created. It is possible to add MDisks to an existing pool to provide additional
extents.
Storage pools can be further divided into subcontainers that are called child pools. Child
pools inherit the properties of the parent pool (extent size, throttle, reduction feature) and can
also be used to provision volumes.
Storage pools are managed either by using the Pools pane or the MDisks by Pool pane. Both
panes allow you to run the same actions on parent pools. However, actions on child pools can
be performed only through the Pools pane. To access the Pools pane, click Pools → Pools,
as shown in Figure 6-2.
The pane lists all storage pools available in the system. If a storage pool has child pools, you
can toggle the sign to the left of the storage pool icon to either show or hide the child pools.
Figure 6-4 Option to create a storage pool in the MDisks by Pools pane
All alternatives open the dialog box that is shown in Figure 6-5.
Every storage pool that is created using the GUI has a default extent size of 1 GB. The size of
the extent is selected at creation time and cannot be changed later. If you want to specify a
different extent size, browse to Settings → GUI Preferences and select Advanced pool
settings, as shown in Figure 6-6.
If encryption is enabled, you can additionally select whether the storage pool is encrypted, as
shown in Figure 6-8.
Note: The encryption setting of a storage pool is selected at creation time and cannot be
changed later. By default, if encryption is enabled, encryption is selected. For more
information about encryption and encrypted storage pools, see Chapter 12, “Encryption”
on page 633.
Enter the name for the pool and click Create. The new pool is created and is included in the
list of storage pools with zero bytes, as shown in Figure 6-9.
Naming rules: When you choose a name for a pool, the following rules apply:
Names must begin with a letter.
The first character cannot be numeric.
The name can be a maximum of 63 characters.
Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9),
underscore (_), period (.), hyphen (-), and space.
Names must not begin or end with a space.
Object names must be unique within the object type. For example, you can have a
volume named ABC and an MDisk called ABC, but you cannot have two volumes called
ABC.
The default object name is valid (object prefix with an integer).
Objects can be renamed to their current names.
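If you create the pool from the CLI instead, the extent size is set explicitly with the -ext parameter (in MiB). The pool name and the 1024 MiB extent size below are examples:
svctask mkmdiskgrp -name ITSO_Pool1 -ext 1024
The pool is created empty, and storage is assigned to it afterward.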
Modify Threshold
The storage pool threshold refers to the percentage of storage capacity that must be in use
for a warning event to be generated. When using thin-provisioned volumes that auto-expand
(automatically use available extents from the pool), monitor the capacity usage and get
warnings before the pool runs out of free extents, so you can add storage. If a
thin-provisioned volume does not have sufficient extents to expand, it goes offline and a 1865
error is generated.
The threshold can be modified by selecting Modify Threshold and entering the new value,
as shown in Figure 6-12. The default threshold is 80%. Warnings can be disabled by setting
the threshold to 0%.
The threshold is visible in the pool properties and is indicated with a red bar, as shown in
Figure 6-13.
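The warning threshold can also be changed from the CLI; the pool name is an example, and the percent sign indicates that the value is a percentage of the pool capacity:
svctask chmdiskgrp -warning 80% ITSO_Pool1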
Add storage
Selecting Add Storage starts the wizard to assign storage to the pool. For a detailed
description of this wizard, see 6.2.1, “Assigning managed disks to storage pools” on
page 214.
Edit Throttle
When clicking this option, a new window opens allowing you to set the Pool’s throttle.
Throttles can be defined for storage pools to control I/O operations on storage systems.
Storage pool throttles can be used to avoid overwhelming the storage system (either external
or internal storage) and can also be used with virtual volumes. Because virtual volumes use
child pools, the throttle limit for the child pool can control the I/O operations from those virtual
volumes. Parent and child pool throttles are independent of each other, so a child pool can
have higher throttle limits than its parent pool. See 6.1.3, “Child storage pools” on page 208
for information about child pools.
If more than one throttle applies to an I/O operation, the lowest and most stringent throttle is
used. For example, if a throttle of 200 MBps is defined on a pool and 100 MBps throttle is
defined on a Volume of that pool, then the I/O operations are limited to 100 MBps.
Note: The storage pool throttle objects for a child pool and a parent pool work
independently of each other.
A child pool throttle is independent of its parent pool throttle. However, volumes of that child
pool inherit the throttle from the pool that they are in. In the example in Figure 6-15,
T3_SASNL_child has a throttle of 200 MBps defined, its parent pool T3_SASNL has a throttle of
100 MBps, and volume TEST_ITSO has a throttle of 1000 IOPS. If the workload applied to the
volume is greater than 200 MBps, it is capped by the T3_SASNL_child throttle.
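As a sketch of the CLI equivalent, a bandwidth throttle can be defined on a pool with the mkthrottle command; the pool name and the 200 MBps limit are examples:
svctask mkthrottle -type mdiskgrp -mdiskgrp T3_SASNL_child -bandwidth 200
Use lsthrottle to list the defined throttles and rmthrottle to remove one.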
Delete
A storage pool can be deleted using the GUI only if no volumes are associated with it.
Selecting Delete deletes the pool immediately without any additional confirmation.
Note: If there are volumes in the pool, Delete cannot be selected. If that is the case, either
delete the volumes or move them to another storage pool before proceeding. To move a
volume, you can either migrate it or use volume mirroring. For information about volume
migration and volume mirroring, see Chapter 7, “Volumes” on page 251.
Properties
Selecting Properties displays information about the storage pool. Additional information is
available by clicking View more details and by hovering over the elements on the window, as
shown in Figure 6-16.
Unlike a parent pool, a child pool does not contain MDisks. Its capacity is provided exclusively
by the parent pool in the form of extents. The capacity of a child pool is set at creation time,
but can be modified later nondisruptively. The capacity must be a multiple of the parent pool
extent size and must be smaller than the free capacity of the parent pool.
Child pools are useful when the capacity allocated to a specific set of volumes must be
controlled. For example, child pools can be used with VMware vSphere Virtual Volumes
(VVols). Storage administrators can restrict access of VMware administrators to only a part of
the storage pool and prevent volumes creation from affecting the rest of the parent storage
pool.
Child pools can also be useful when strict control over thin-provisioned volume expansion is
needed. You could, for example, create a child pool with no volumes in it that would act as an
emergency set of extents. That way, if the parent pool ever runs out of free extents, you can
use the ones from the child pool.
Child pools can also be used when a different encryption key is needed for different sets of
volumes.
Child pools inherit most properties from their parent pools, and these cannot be changed. The
inherited properties include the following:
Extent size
Easy Tier setting
Encryption setting, but only if the parent pool is encrypted
Note: For information about encryption and encrypted child storage pools, see Chapter 12,
“Encryption” on page 633.
Creating a child storage pool
To create a child pool, browse to Pools → Pools, right-click the parent pool that you want to
create a child pool from, and select Create Child Pool, as shown in Figure 6-17.
When the dialog window opens, enter the name and capacity of the child pool and click
Create, as shown in Figure 6-18.
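From the CLI, a child pool is created by specifying the parent pool and the capacity; the names and the 500 GiB size are examples:
svctask mkmdiskgrp -name T3_SASNL_child -parentmdiskgrp T3_SASNL -size 500 -unit gb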
To select an action, right-click the child storage pool, as shown in Figure 6-20. Alternatively,
select the storage pool and click Actions.
Resize
Selecting Resize allows you to increase or decrease the capacity of the child storage pool, as
shown in Figure 6-21. Enter the new pool capacity and click Resize.
Note: You cannot shrink a child pool below its real capacity. Thus, the new size of a child
pool needs to be larger than the capacity used by its volumes.
When the child pool is shrunk, the system resets the warning threshold and issues a warning
if the threshold is reached.
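A sketch of the CLI equivalent, assuming the chmdiskgrp -size and -unit parameters are used for the child pool (the value shown is an example and is subject to the same restrictions described above):
svctask chmdiskgrp -size 600 -unit gb T3_SASNL_child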
Delete
Deleting a child pool is a task quite similar to deleting a parent pool. As with a parent pool, the
Delete action is disabled if the child pool contains volumes, as shown in Figure 6-22.
After deleting a child pool, the extents that it occupied return to the parent pool as free
capacity.
Volumes migration
To move a volume to another pool, you can use migration or volume mirroring in the same
way you use them for parent pools. For information about volume migration and volume
mirroring, see Chapter 7, “Volumes” on page 251.
In the example in Figure 6-23, volume TEST ITSO has been created in child pool
T3_SASNL_child. Note that child pools appear exactly like parent pools in the Volumes by Pool
pane.
A volume from a child pool can be migrated only to its parent pool or to another child pool of
the same parent pool. As shown in Figure 6-24, the volume TEST ITSO can be migrated only
to its parent pool (T3_SASNL) or to a child pool with the same parent pool (T3_SASNL_child0). This
migration limitation does not apply to volumes that belong to parent pools.
During a volume migration within a parent pool (between a child and its parent, or between
children with the same parent), there is no data movement, only extent reassignments.
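From the CLI, the same movement is done with the migratevdisk command; the volume and pool names below are examples based on the preceding scenario:
svctask migratevdisk -vdisk TEST_ITSO -mdiskgrp T3_SASNL
The progress of a migration can be monitored with lsmigrate.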
Arrays are created from internal storage using RAID technology to provide redundancy and
increased performance. The system supports two types of RAID: Traditional RAID and
distributed RAID. Arrays are assigned to storage pools at creation time and cannot be moved
between storage pools. You cannot have an array that does not belong to any storage pool.
MDisks are managed by using the MDisks by Pools pane. To access the MDisks by Pools
pane, browse to Pools → MDisks by Pools, as shown in Figure 6-25.
The pane lists all the MDisks available in the system under the storage pool to which they
belong. Both arrays and external MDisks are listed.
Additionally, external MDisks can be managed through the External Storage pane. To access
the External Storage pane, browse to Pools → External Storage.
For more information about the IBM Easy Tier feature, see Chapter 10, “Advanced features for
storage efficiency” on page 407.
Note: When Easy Tier is turned on for a pool, movement of extents between tiers of
storage (inter-tier) or between MDisks within a single tier (intra-tier) is based on the activity
that is monitored. Therefore, when adding an MDisk to a pool, extent migration will not be
performed immediately. No migration of extents will occur until there is sufficient activity to
trigger it.
If balancing of extents within the pool is needed immediately after the MDisks are added,
then a manual extents placement is needed. Because this manual process can be quite
complex, IBM provides a script available here:
https://www.ibm.com/marketing/iwm/iwm/web/preLogin.do?source=swg-SVCTools
This script provides a solution to the problem of rebalancing the extents in a pool after a
new MDisk has been added. The script uses available free space to shuffle extents until
the number of extents from each volume on each MDisk is directly proportional to the size
of the MDisk.
To assign MDisks to a storage pool, navigate to Pools → MDisks by Pools and choose one
of the following options:
Option 1: Select Add Storage on the right side of the storage pool, as shown in
Figure 6-26. The Add Storage button is shown only when the pool has no capacity
assigned or when the pool capacity usage is over the warning threshold.
Option 2: Right-click the pool and select Add Storage, as shown in Figure 6-27.
Both options 1 and 2 start the configuration wizard shown in Figure 6-29. If no external
storage is attached, the External option is not shown. If Internal is chosen, the system
guides you through MDisk creation. If External is selected, the MDisks are already created
and the system guides you through the selection of external storage. Option 3 allows you to
select the pool that you want to add new MDisks to.
The Quick internal configuration option of assigning storage to a pool guides the user through the
steps of creating one or more MDisks and then assigning them to the selected pool. Because it
is possible to assign multiple MDisks at the same time during this process, or because the
existing pool might already have a configured set of MDisks, compatibility checks are done by the
system when it creates the new MDisks.
For example, if you have a set of 10K RPM drives and another set of 15K RPM drives
available, you cannot place an MDisk made of 10K RPM drives and an MDisk made of 15K
RPM disks to the same pool. You would need to create two separate pools.
Selecting Quick internal automatically defaults parameters such as stripe width, number of
spares (for traditional RAID), number of rebuild areas (for distributed RAID), and number of
drives of each class. The number of drives is the only value that can be adjusted when
creating the array. Depending on the number of drives selected for the new array, the RAID
level automatically adjusts.
For example, if you select two drives only, the system will automatically create a RAID-10
array, with no spare drive. For more control of the array creation steps, you can select the
Internal Custom option. For more information, see “Advanced internal configuration” on
page 218.
By default, if there are enough candidate drives, the system recommends traditional arrays for
most new configurations of MDisks. However, use Distributed RAID when possible, with the
Advanced Internal Custom option. For information about traditional and Distributed RAID,
see 6.2.2, “Traditional and distributed RAID” on page 220. Figure 6-30 shows an example of a
Quick internal configuration.
Figure 6-30 Quick internal configuration: Pool with a single class of drives
If the system has multiple drives classes (like Flash and Enterprise disks for example), the
default option is to create multiple arrays of different tiers and assign them to the pool to take
advantage of the Easy Tier functionality. However, this configuration can be adjusted by
setting the number of drives of different classes to zero. For information about Easy Tier see
Chapter 10, “Advanced features for storage efficiency” on page 407.
When you are satisfied with the configuration presented, click Assign. The RAID arrays, or
MDisks, are then created and start initializing in the background. The progress of the
initialization process can be monitored by selecting the correct task under Running Tasks in
the upper-right corner, as shown in Figure 6-31. The array is available for I/O during this
process.
By clicking View in the Running tasks list, you can see the initialization progress and the time
remaining, as shown in Figure 6-32. Note that the array initialization time depends on the type of
drives that the array is made of. Initializing an array of flash drives is much quicker than
initializing an array of NL-SAS drives, for example.
Figure 6-33 shows an example with nine drives ready to be configured as DRAID 6, with the
equivalent of one drive capacity of spare (distributed over the nine disks).
Figure 6-33 Adding internal storage to a pool using the Advanced option
To return to the default settings, click the Refresh button next to the pool capacity. To create
and assign the arrays, click Assign.
Attention: If you need to preserve existing data on an unmanaged MDisk, do not assign it
to a storage pool because this action deletes the data on the MDisk. Use Import instead.
See “Import” on page 230 for information about this action.
Note: Use Distributed RAID whenever possible. The distributed configuration dramatically
reduces rebuild times and decreases the exposure volumes have to the extra load of
recovering redundancy.
Traditional RAID
In a traditional RAID approach, whether it is RAID10, RAID5, or RAID6, data is spread among
drives in an array. However, the spare space is constituted by spare drives, which are global
and sit outside of the array. When one of the drives within the array fails, all data is read from
the mirrored copy (for RAID10), or is calculated from remaining data stripes and parity (for
RAID5 or RAID6), and written to one single spare drive.
Figure 6-35 shows a traditional RAID6 array with two global spare drives, and data and parity
striped among five drives.
If a drive fails, data is calculated from the remaining strips in a stripe and written to the spare
strip in the same stripe on a spare drive, as shown in Figure 6-36.
Distributed RAID also has the ability to distribute data and parity strips among more drives
than traditional RAID. This feature means more drives can be used to create one array,
improving performance of a single managed disk.
Figure 6-37 shows a distributed RAID6 array with the stripe width of five distributed among
10 physical drives. The reserved spare space is marked as yellow and is equivalent to two
spare drives. Both distributed RAID5 and distributed RAID6 divide the physical drives into
rows and packs. A row has the size of the array width and contains only one strip from each
drive in the array. A pack is a group of several contiguous rows, and its size depends on the
number of strips in a stripe.
In case of a drive failure, all data is calculated using the remaining data stripes and parities
and written to a spare space within each row, as shown in Figure 6-38.
The following are the minimum number of drives needed to build a Distributed Array:
Six drives for a Distributed RAID6 array
Four drives for a Distributed RAID5 array
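From the CLI, a distributed array is created with the mkdistributedarray command. The drive class ID, drive count, stripe width, and number of rebuild areas shown below are example values that must match your installed drives (lsdriveclass shows the available classes), and ITSO_Pool1 is an example pool name:
svcinfo lsdriveclass
svctask mkdistributedarray -level raid6 -driveclass 0 -drivecount 9 -stripewidth 8 -rebuildareas 1 ITSO_Pool1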
To choose an action, select the array (MDisk) and click Actions, as shown in Figure 6-39.
Alternatively, right-click the array.
Swap drive
Selecting Swap Drive allows the user to replace a drive in the array with another drive. The
other drive must have a use property of Candidate or Spare. This action can be used to replace a
drive that is expected to fail soon, for example, as indicated by an error message in the event
log.
Figure 6-40 shows the dialog box that opens. Select the member drive to be replaced and the
replacement drive, and click Swap.
The exchange of the drives starts running in the background. The volumes on the affected
MDisk remain accessible during the process.
Figure 6-41 If there are insufficient spare drives available, an error 1690 is logged
Delete
Selecting Delete removes the array from the storage pool and deletes it.
Remember: An array or an MDisk does not exist outside of a storage pool. Therefore, an
array cannot be removed from the pool without being deleted.
If there are no volumes using extents from this array, the command runs immediately without
additional confirmation. If there are volumes using extents from this array, you are prompted
to confirm the action, as shown in Figure 6-42.
Confirming the action starts the migration of the volumes to extents from other MDisks that
remain in the pool. After the action completes, the array is removed from the storage pool and
deleted. When an MDisk is deleted from a storage pool, extents in use are migrated to
MDisks in the same tier as the MDisk being removed, if possible. If insufficient extents exist in
that tier, extents from the other tier are used.
Note: Ensure that you have enough available capacity remaining in the storage pool to
allocate the data being migrated from the removed array, or else the command will fail.
Dependent Volumes
Volumes are entities made of extents from a storage pool. The extents of the storage pool
come from various MDisks. A volume can then be spread over multiple MDisks, and MDisks
can serve multiple volumes. Clicking Dependent Volumes in the Actions menu of an MDisk lists
the volumes that depend on that MDisk, as shown in Figure 6-43.
To choose an action, right-click the external MDisk, as shown in Figure 6-45. Alternatively,
select the external MDisk and click Actions.
Assign
This action is available only for unmanaged MDisks. Selecting Assign opens the dialog box
that is shown in Figure 6-46. This action is equivalent to the wizard described in Quick
external configuration, but acts only on the selected MDisk or MDisks.
Attention: If you need to preserve existing data on an unmanaged MDisk, do not assign it
to a storage pool because this action deletes the data on the MDisk. Use Import instead.
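The CLI equivalent of assigning an already discovered external MDisk to a pool is the addmdisk command; the MDisk and pool names below are examples:
svctask addmdisk -mdisk mdisk5 ITSO_Pool1
Remember that this destroys any data on the MDisk, exactly as the GUI action does.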
For information about storage tiers and their importance, see Chapter 10, “Advanced features
for storage efficiency” on page 407.
Modify encryption
This option is available only when encryption is enabled. Selecting Modify Encryption allows
the user to modify the encryption setting for the MDisk, as shown in Figure 6-48.
For example, if the external MDisk is already encrypted by the external storage system,
change the encryption state of the MDisk to Externally encrypted. This setting stops the
system from encrypting the MDisk again if the MDisk is part of an encrypted storage pool.
For information about encryption, encrypted storage pools, and self-encrypting MDisks, see
Chapter 12, “Encryption” on page 633.
Import
This action is available only for unmanaged MDisks. Importing an unmanaged MDisk allows
the user to preserve the data on the MDisk, either by migrating the data to a new volume or by
keeping the data on the external system.
Note: This is the preferred method to migrate data from legacy storage to the SVC. While
an MDisk is imported, the data on the original disks is not modified. The system
acts as a pass-through, and the extents of the imported MDisk do not contribute to storage
pools.
Selecting Import allows you to choose one of the following migration methods:
Import to temporary pool as image-mode volume does not migrate data from the
source MDisk. It creates an image-mode volume that has a direct block-for-block
translation of the MDisk. The existing data is preserved on the external storage system,
but it is also accessible from the SVC system.
If this method is selected, the image-mode volume is created in a temporary migration
pool and presented through the SVC. Choose the extent size of the temporary pool and
click Import, as shown in Figure 6-49.
The MDisk is imported and listed as an image mode MDisk in the temporary migration
pool, as shown in Figure 6-50.
The image-mode volume can then be mapped to the original host. The data is still
physically present on the disks of the original external storage system and no automatic
migration process is running. The original host sees no difference and the applications can
continue to run. The image-mode volume is now handled by IBM Spectrum Virtualize. If
needed, the image-mode volume can later be migrated manually to another storage pool by
using volume migration or volume mirroring.
Import to an existing pool creates an image-mode volume as in the previous method, but
then migrates the data from the source MDisk to the selected destination pool. The data
migration begins automatically after the MDisk is imported successfully as an
image-mode volume. You can check the migration progress by clicking the task under
Running Tasks, as shown in Figure 6-53.
After the migration completes, the volume is available in the chosen destination pool, as
shown in Figure 6-54. This volume is no longer an image-mode volume. Instead, it is a
normal striped volume.
Figure 6-54 The migrated MDisk is now a Volume in the selected pool
At this point, all data has been migrated off the source MDisk and the MDisk is no longer in
image mode, as shown in Figure 6-55. The MDisk can be removed from the temporary
pool. It returns to the list of external MDisks and can be used as a regular MDisk to host
volumes, or the legacy storage system can be decommissioned.
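If you prefer the CLI, an equivalent import can be sketched with mkvdisk in image mode, optionally followed by a manual migration to an existing pool. The object names below (Pool_IMG_temp, mdisk10, legacy_vol, and Pool0) are placeholders for this example:
mkvdisk -mdiskgrp Pool_IMG_temp -iogrp 0 -mdisk mdisk10 -vtype image -name legacy_vol
migratevdisk -vdisk legacy_vol -mdiskgrp Pool0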
Include
The system can exclude an MDisk with multiple I/O failures or persistent connection errors
from its storage pool to ensure that these errors do not interfere with data access. If an MDisk
has been automatically excluded, run the fix procedures to resolve any connection and I/O
failure errors. Drives with multiple errors that are used by the excluded MDisk might need to
be replaced or reseated.
After the problems have been fixed, select Include to add the excluded MDisk back into the
storage pool.
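The same action is available on the CLI through the includemdisk command; for example, assuming that the excluded MDisk is mdisk5:
includemdisk mdisk5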
Remove
In some cases, you might want to remove external MDisks from storage pools to reorganize
your storage allocation. Selecting Remove removes the MDisk from the storage pool. After
the MDisk is removed, it goes back to unmanaged. If there are no volumes in the storage pool
to which this MDisk is allocated, the command runs immediately without additional
confirmation. If there are volumes in the pool, you are prompted to confirm the action, as
shown in Figure 6-56.
Confirming the action starts the migration of the volumes to extents from other MDisks that
remain in the pool. When the action completes, the MDisk is removed from the storage pool
and returns to unmanaged. When an MDisk is removed from a storage pool, extents in use are
migrated to MDisks in the same tier as the MDisk being removed, if possible. If insufficient
extents exist in that tier, extents from the other tier are used.
Ensure that you have enough available capacity remaining in the storage pool to allocate the
data being migrated from the removed MDisk or else the command fails.
Important: The MDisk being removed must remain accessible to the system while all data
is copied to other MDisks in the same storage pool. If the MDisk is unmapped before the
migration finishes, all volumes in the storage pool go offline and remain in this state until
the removed MDisk is connected again.
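On the CLI, the equivalent action is the rmmdisk command. The following line is a sketch that assumes an MDisk named mdisk5 in a pool named Pool0; the -force flag requests the migration of any extents that are still in use to the remaining MDisks in the pool:
rmmdisk -mdisk mdisk5 -force Pool0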
6.3 Working with internal drives
The SVC system provides an Internal Storage pane for managing all internal drives. To
access the Internal Storage pane, browse to Pools → Internal Storage, as shown in
Figure 6-57.
The pane gives an overview of the internal drives in the SVC system. Selecting All Internal in
the drive class filter displays all drives that are managed in the system, including all I/O
groups and expansion enclosures. Selecting a drive class on the left side of the pane filters
the list to display only the drives of that class.
You can find information regarding the capacity allocation of each drive class in the upper
right corner, as shown in Figure 6-58:
Total Capacity shows the overall capacity of the selected drive class.
MDisk Capacity shows the storage capacity of the selected drive class that is assigned to
MDisks.
Spare Capacity shows the storage capacity of the selected drive class that is used for
spare drives.
If All Internal is selected under the drive class filter, the values shown refer to the entire
internal storage.
The percentage bar indicates how much of the total capacity is allocated to MDisks and spare
drives, with MDisk capacity being represented by dark blue and spare capacity by light blue.
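The same inventory is available from the CLI with the lsdrive command, which lists each drive with its status, use (member, spare, candidate, or unused), technology type, and capacity. Specifying a drive ID shows the detailed view for that drive; for example:
lsdrive
lsdrive 1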
The actions available depend on the status of the drive or drives selected. Some actions can
only be run for individual drives.
Fix error
This action is only available if the drive selected is in an error condition. Selecting Fix Error
starts the Directed Maintenance Procedure (DMP) for the defective drive. For more
information about DMPs, see Chapter 13, “RAS, monitoring, and troubleshooting” on
page 689.
Take offline
Selecting Take Offline allows the user to take a drive offline. Select this action only if there is
a problem with the drive and a spare drive is available. When this action is selected, you are
prompted to confirm it, as shown in Figure 6-60.
If a spare drive is available and the drive is taken offline, the MDisk of which the failed drive is
a member remains Online and the spare is automatically reassigned. If no spare drive is
available and the drive is taken offline, the array of which the failed drive is a member gets
Degraded. Consequently, the storage pool to which the MDisk belongs gets Degraded as well,
as shown in Figure 6-61.
Figure 6-61 Degraded Pool and MDisk in case there is no more spare in the array
Figure 6-62 Taking a drive offline fails in case there is no spare in the array
Losing another drive in the MDisk results in data loss. Figure 6-63 shows the error in this
case.
Figure 6-63 Taking a drive offline fails if there is a risk of losing data
A drive that is taken offline is considered Failed, as shown in Figure 6-64.
Mark as
Selecting Mark as allows you to change the usage assigned to the drive. The following use
options are available as shown in Figure 6-65:
Unused: The drive is not in use and cannot be used as a spare.
Candidate: The drive is available to be used in an MDisk.
Spare: The drive can be used as a hot spare, if required.
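The drive use can also be changed from the CLI with the chdrive command. As a sketch, the following commands assume a drive with ID 5:
chdrive -use candidate 5
chdrive -use spare 5
chdrive -use unused 5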
Identify
Selecting Identify turns on the LED light so you can easily identify a drive that must be
replaced or that you want to troubleshoot. Selecting this action opens a dialog box like the
one shown in Figure 6-67.
Upgrade
Selecting Upgrade allows the user to update the drive firmware as shown in Figure 6-68. You
can choose to update individual drives or all the drives that have available updates.
For information about updating drive firmware, see Chapter 13, “RAS, monitoring, and
troubleshooting” on page 689.
Figure 6-69 shows the list of volumes dependent on a set of three drives that belong to the
same MDisk. This configuration means that all listed volumes will go offline if all selected
drives go offline. If only one drive goes offline, then there is no volume dependency.
Note: A lack of dependent volumes does not imply that there are no volumes using the
drive. Volume dependency actually shows the list of volumes that would become
unavailable if the selected drive or the group of selected drives becomes unavailable.
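The same check can be run from the CLI by passing a list of drive IDs to the lsdependentvdisks command; for example, assuming drives 0, 1, and 2:
lsdependentvdisks -drive 0:1:2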
Checking Show Details in the left corner of the window shows more details, including vendor
ID, product ID, and part number. You can also display drive slot details by selecting Drive
Slot.
External storage controllers with both types of attachment can be managed through the
External Storage pane. To access the External Storage pane, browse to Pools → External
Storage, as shown in Figure 6-71.
The pane lists the external controllers that are connected to the SVC system and all the
external MDisks detected by the system. The MDisks are organized by the external storage
system that presents them. You can toggle the sign to the left of the controller icon to either
show or hide the MDisks associated with the controller.
If you have configured logical unit names on your external storage systems, it is not possible
for the SVC system to determine these names because they are local to the external storage
system. However, you can use the WWNNs of the external storage system and the LU
numbers to identify each device.
If the external controller is not detected, ensure that the SVC is cabled and zoned into the
same storage area network (SAN) as the external storage system. If you are using Fibre
Channel, connect the Fibre Channel cables to the Fibre Channel ports of the canisters in your
system, and then to the Fibre Channel network. If you are using Fibre Channel over Ethernet,
connect Ethernet cables to the 10 Gbps Ethernet ports.
Attention: If the external controller is a Storwize system, the SVC must be configured at
the replication layer and the external controller must be configured at the storage layer.
The default layer for a Storwize system is storage. Make sure that the layers are correct
before zoning the two systems together. Changing the system layer is not available in the
GUI. You need to use the command-line interface (CLI).
Ensure that the layer of both systems is correct by entering the following command:
svcinfo lssystem
If needed, change the layer of the SVC to replication by entering the following command:
chsystem -layer replication
If needed, change the layer of the Storwize controller to storage by entering the following
command:
chsystem -layer storage
For more information about layers and how to change them, see Chapter 11, “Advanced
Copy Services” on page 461.
Figure 6-72 Fully redundant iSCSI connection between a Storwize system and SVC
For an example of how to cable the IBM Spectrum Accelerate to the SVC, see IBM
Knowledge Center:
https://ibm.biz/BdjS9u
For an example of how to cable the Dell EqualLogic to the SVC, see IBM Knowledge
Center at:
https://ibm.biz/BdjS9C
Ensure that the ports used for iSCSI attachment are enabled for external storage
connections. By default, Ethernet ports are disabled for external storage connections. You can verify the
setting of your Ethernet ports by navigating to Settings → Network and selecting
Ethernet Ports, as shown in Figure 6-73.
To enable the port for external storage connections, select the port, click Actions and select
Modify Storage Ports, as shown in Figure 6-74.
Set the port as Enabled for either IPv4 or IPv6, depending on the protocol version configured
for the connection, as shown in Figure 6-75.
Attention: Unlike Fibre Channel connections, iSCSI connections require the SVC to be
configured at the replication layer for every type of external controller. However, as with
Fibre Channel, if the external controller is a Storwize system, the controller must be
configured at the storage layer. The default layer for a Storwize system is storage.
If the SVC is not configured at the replication layer when Add External iSCSI Storage is
selected, you are prompted to do so, as shown in Figure 6-77 on page 247.
If the Storwize controller is not configured at the storage layer, this must be changed by
using the CLI.
Ensure that the layer of the Storwize controller is correct by entering the following
command:
svcinfo lssystem
If needed, change the layer of the Storwize controller to storage by entering the following
command:
chsystem -layer storage
For more information about layers and how to change them see Chapter 11, “Advanced
Copy Services” on page 461.
Figure 6-77 Converting the system layer to replication to add iSCSI external storage
Select Convert the system to the replication layer and click Next.
Select the type of external storage, as shown in Figure 6-78. For this example, the IBM
Storwize type is chosen. Click Next.
The fields available vary depending on the configuration of your system and the type of
external controller. However, the meaning of each field is the same in all cases. The following fields can
also be available:
Site: Enter the site associated with the external storage system. This field is shown only
for configurations that use HyperSwap.
User name: Enter the user name associated with this connection. If the target storage
system uses CHAP to authenticate connections, you must enter a user name. If you
specify a user name, you must specify a CHAP secret. This field is not required if you do
not use CHAP. This field is shown only for IBM Spectrum Accelerate and Dell EqualLogic
controllers.
Click Finish. The system attempts to discover the target ports and establish iSCSI sessions
between source and target. If the attempt is successful, the controller is added. Otherwise,
the action fails.
To select any action, right-click the controller, as shown in Figure 6-80. Alternatively, select
the controller and click Actions.
Discover storage
When you create or remove LUs on an external storage system, the change is not always
automatically detected. If that is the case, select Discover Storage for the system to rescan
the Fibre Channel or iSCSI network. The rescan process discovers any new MDisks that were
added to the system and rebalances MDisk access across the available ports. It also detects
any loss of availability of the controller ports.
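The same rescan can be triggered from the CLI with the detectmdisk command, which takes no required parameters, and the result can then be checked with lsmdisk:
detectmdisk
lsmdisk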
Naming rules: When you choose a name for a controller, the following rules apply:
Names must begin with a letter.
The first character cannot be numeric.
The name can be a maximum of 63 characters.
Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9),
underscore (_), period (.), hyphen (-), and space.
Names must not begin or end with a space.
Object names must be unique within the object type. For example, you can have a
volume named ABC and an MDisk called ABC, but you cannot have two volumes called
ABC.
The default object name is valid (object prefix with an integer).
Objects can be renamed to their current names.
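Renaming a controller in line with these rules can also be done from the CLI with the chcontroller command. The following line is a sketch; the new name ITSO_V7000 and the default name controller0 are placeholders:
chcontroller -name ITSO_V7000 controller0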
Modify site
This action is available only for systems that use HyperSwap. Selecting Modify Site allows
the user to modify the site with which the external controller is associated, as shown in
Figure 6-82.
Chapter 7. Volumes
This chapter describes how to create and provision volumes for IBM Spectrum Virtualize
systems. In this case, a volume is a logical disk provisioned out of a storage pool and is
recognized by a host with a unique identifier (UID) field and a parameter list.
The first part of this chapter provides a brief overview of IBM Spectrum Virtualize volumes,
the classes of volumes available, and the topologies that they are associated with. It also
provides an overview of the advanced customization available.
The second part describes how to create volumes using the GUI and shows you how to map
these volumes to defined hosts.
The third part provides an introduction to the new volume manipulation commands, which are
designed to facilitate the creation and administration of volumes used for the IBM HyperSwap
and Enhanced Stretched Cluster topologies.
Note: A managed disk (MDisk) is a logical unit of physical storage. MDisks are either
Redundant Arrays of Independent Disks (RAIDs) from internal storage, or external physical
disks that are presented as a single logical disk on the SAN. Each MDisk is divided into
several extents, which are numbered sequentially from 0, from the start to the end of the
MDisk. The extent size is a property of the storage pools that the MDisks are added to.
Volumes have two major modes: Managed mode and image mode. Managed mode volumes
have two policies: The sequential policy and the striped policy. Policies define how the extents
of a volume are allocated from a storage pool.
The type attribute of a volume defines the allocation of extents that make up the volume copy:
A striped volume contains a volume copy that has one extent allocated in turn from each
MDisk that is in the storage pool. This is the default option. However, you can also supply
a list of MDisks to use as the stripe set as shown in Figure 7-1.
Attention: By default, striped volume copies are striped across all MDisks in the
storage pool. If some of the MDisks are smaller than others, the extents on the smaller
MDisks are used up before the larger MDisks run out of extents. Manually specifying
the stripe set in this case might result in the volume copy not being created.
If you are unsure if sufficient free space is available to create a striped volume copy,
select one of the following options:
Check the free space on each MDisk in the storage pool by using the lsfreeextents
command.
Let the system automatically create the volume copy by not supplying a specific
stripe set.
A sequential volume contains a volume copy that has extents allocated sequentially on
one MDisk.
Image-mode volumes are a special type of volume that has a direct relationship with one
MDisk.
An image mode MDisk is mapped to one, and only one, image mode volume.
The volume capacity that is specified must be equal to the size of the image mode MDisk.
When you create an image mode volume, the specified MDisk must be in unmanaged mode
and must not be a member of a storage pool. The MDisk is made a member of the specified
storage pool (Storage Pool_IMG_xxx) as a result of creating the image mode volume.
An image mode MDisk is associated with exactly one volume. If the size of the image mode
MDisk is not a multiple of the extent size of the storage pool, the last extent is partial (not filled). An image
mode volume is a pass-through one-to-one map of its MDisk. It cannot be a quorum disk and
it does not have any IBM Spectrum Virtualize system metadata extents assigned to it.
Managed or image mode MDisks are always members of a storage pool.
It is a preferred practice to put image mode MDisks in a dedicated storage pool and use a
special name for it (for example, Storage Pool_IMG_xxx). The extent size that is chosen for
this specific storage pool must be the same as the extent size into which you plan to migrate
the data. All of the IBM Spectrum Virtualize copy services functions can be applied to image
mode disks. See Figure 7-2.
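For reference, both volume types can be created from the CLI with the mkvdisk command. The following commands are a sketch with placeholder names (Pool0, Pool_IMG_001, mdisk10, and the volume names); the first creates a striped volume, and the second creates an image-mode volume from an unmanaged MDisk:
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name striped_vol01
mkvdisk -mdiskgrp Pool_IMG_001 -iogrp 0 -mdisk mdisk10 -vtype image -name image_vol01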
Figure 7-3 shows this mapping. It also shows a volume that consists of several extents that
are shown as V0 - V7. Each of these extents is mapped to an extent on one of the MDisks:
A, B, or C. The mapping table stores the details of this indirection.
Several of the MDisk extents are unused. No volume extent maps to them. These unused
extents are available for use in creating volumes, migration, expansion, and so on.
The allocation of a specific number of extents from a specific set of MDisks is performed by
the following algorithm:
If the set of MDisks from which to allocate extents contains more than one MDisk, extents
are allocated from MDisks in a round-robin fashion.
If an MDisk has no free extents when its turn arrives, its turn is missed and the round-robin
moves to the next MDisk in the set that has a free extent.
When a volume is created, the first MDisk from which to allocate an extent is chosen in a
pseudo-random way rather than by choosing the next disk in a round-robin fashion. The
pseudo-random algorithm avoids the situation where the striping effect places the first extent
for many volumes on the same MDisk. This effect is inherent in a round-robin algorithm.
Placing the first extent of several volumes on the same MDisk can lead to poor performance
for workloads that place a large I/O load on the first extent of each volume, or that create
multiple sequential streams.
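You can inspect how extents were distributed by using the CLI. As a sketch, assuming a volume named vol01 and an MDisk named mdisk2, the following commands list the number of extents that each volume uses on an MDisk, the extents of a volume per MDisk, and the free extents that remain on an MDisk:
lsmdiskextent mdisk2
lsvdiskextent vol01
lsfreeextents mdisk2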
Note: Having cache-disabled volumes makes it possible to use the native copy services
in the underlying RAID array controller for MDisks (LUNs) that are used as IBM
Spectrum Virtualize image mode volumes. Consult with IBM Support before turning off
the cache for volumes in your production environment to avoid any performance
degradation.
The two copies of the volume often are allocated from separate storage pools or by using
image-mode copies. The volume can participate in FlashCopy and remote copy relationships.
It is serviced by an I/O Group, and has a preferred node.
Each copy is not a separate object and cannot be created or manipulated except in the
context of the volume. Copies are identified through the configuration interface with a copy ID
of their parent volume. This copy ID can be 0 or 1.
This feature provides a point-in-time copy function that is achieved by “splitting” a copy from
the volume. However, the mirrored volume feature does not address other forms of mirroring
that are based on remote copy (such as IBM HyperSwap), which mirror volumes across
I/O Groups or clustered systems. It is also not intended to manage mirroring or remote copy
functions in back-end controllers.
Figure 7-4 provides an overview of volume mirroring.
A second copy can be added to a volume with a single copy or removed from a volume with
two copies. Checks prevent the accidental removal of the only remaining copy of a volume. A
newly created, unformatted volume with two copies initially has the two copies in an
out-of-synchronization state. The primary copy is defined as “fresh” and the secondary copy
is defined as “stale”.
The synchronization process updates the secondary copy until it is fully synchronized. This
update is done at the default synchronization rate or at a rate that is defined when the volume
is created or modified. The synchronization status for mirrored volumes is recorded on the
quorum disk.
If a two-copy mirrored volume is created with the format parameter, both copies are formatted
in parallel. The volume comes online when both operations are complete with the copies in
sync.
If mirrored volumes are expanded or shrunk, all of their copies are also expanded or shrunk.
If it is known that MDisk space (which is used for creating copies) is already formatted or if the
user does not require read stability, a no synchronization option can be selected that
declares the copies as synchronized even when they are not.
To minimize the time that is required to resynchronize a copy that is out of sync, only the
256 kibibyte (KiB) grains that were written to since the synchronization was lost are copied.
This approach is known as an incremental synchronization. Only the changed grains must be
copied to restore synchronization.
Important: An unmirrored volume can be migrated from one location to another by adding
a second copy to the wanted destination, waiting for the two copies to synchronize, and
then removing the original copy 0. This operation can be stopped at any time. The two
copies can be in separate storage pools with separate extent sizes.
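As a sketch of this migration technique on the CLI (assuming a volume named vol01 and a destination pool named Pool1), you can add a second copy, monitor synchronization, and then remove the original copy 0:
addvdiskcopy -mdiskgrp Pool1 vol01
lsvdisksyncprogress vol01
rmvdiskcopy -copy 0 vol01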
Placing the primary copy on a high-performance controller maximizes the read performance
of the volume.
Figure 7-5 Data flow for write I/O processing in a mirrored volume
As shown in Figure 7-5, all the writes are sent by the host to the preferred node for each
volume (1). Then, the data is mirrored to the cache of the partner node in the I/O Group (2),
and acknowledgment of the write operation is sent to the host (3). The preferred node then
destages the written data to the two volume copies (4).
With version 7.3, the cache architecture changed from an upper-cache design to a two-layer
cache design. With this change, the data is only written once, and is then directly destaged
from the controller to the locally attached disk system.
Figure 7-6 shows the data flow in a stretched environment.
Figure 7-6 Write data flow in a stretched environment (preferred node at Site 1, non-preferred node at Site 2)
A volume with copies can be checked to see whether all of the copies are identical or
consistent. If a medium error is encountered while it is reading from one copy, it is repaired by
using data from the other copy. This consistency check is performed asynchronously with
host I/O.
Important: Mirrored volumes can be taken offline if no quorum disk is available. This
behavior occurs because the synchronization status for mirrored volumes is recorded on
the quorum disk.
Mirrored volumes use bitmap space at a rate of 1 bit per 256 KiB grain, which provides 1 MiB
of bitmap space supporting 2 TiB of mirrored volumes. The default allocation of bitmap space
is 20 MiB, which supports 40 TiB of mirrored volumes. If all 512 MiB of variable bitmap space
is allocated to mirrored volumes, 1 PiB of mirrored volumes can be supported.
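If more volume mirroring capacity is needed, the bitmap memory for the mirroring feature can be increased per I/O group with the chiogrp command. The following is a hedged example that assumes I/O group io_grp0 and a target of 40 MiB of mirroring bitmap space; verify the limits for your code level before changing this value:
chiogrp -feature mirror -size 40 io_grp0
lsiogrp io_grp0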