
Front cover

Implementing IBM FlashSystem
9200, 9100, 7200, and 5100 Systems with IBM Spectrum Virtualize V8.3.1

Jack Armstrong
Tiago Bastos
Pawel Brodacki
Markus Döllinger
Jon Herd
Sergey Kubin
Carsten Larsen
Hartmut Lonzer
Jon Tate

IBM Redbooks

Implementing IBM FlashSystem 9200, 9100, 7200, and 5100 Systems with IBM Spectrum Virtualize V8.3.1

September 2020

SG24-8466-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xv.

First Edition (September 2020)

This edition applies to IBM Spectrum Virtualize V8.3.1 and the associated hardware and software that is
detailed within. The screen captures included within this book might differ from the generally available (GA)
version because parts of this book were written with pre-GA code. On 11 February 2020, IBM announced that
it was simplifying its portfolio. This book was written by using previous models of the product line before the
simplification; however, most of the general principles apply. If you are in any doubt as to their applicability,
contact your local IBM representative.

© Copyright International Business Machines Corporation 2020. All rights reserved.


Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi

Chapter 1. Introduction and system overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 Benefits of using IBM Spectrum Virtualize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Storage virtualization terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Latest changes and enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 IBM Spectrum Virtualize family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.1 IBM FlashSystem simplified terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 IBM FlashSystem 5100 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5.1 IBM FlashSystem 5100 Model 4H4 supported hardware . . . . . . . . . . . . . . . . . . . 12
1.5.2 IBM FlashSystem 5100 Model 4F4 supported hardware . . . . . . . . . . . . . . . . . . . 12
1.5.3 Supported fabric protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.4 Host interface cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.5 On-board ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5.6 IBM Spectrum Virtualize Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.7 Feature set comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.8 Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.5.9 Storage migration and management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.5.10 Data migration by using the external virtualization feature . . . . . . . . . . . . . . . . . 17
1.5.11 External storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.5.12 Managing internal storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6 Features for storage efficiency and data reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.6.1 Compression and deduplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.6.2 Features for enhanced data security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.7 Copy services and HyperSwap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.7.1 HyperSwap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.7.2 Features for application integration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.8 Features for manageability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.9 IBM FlashSystem 7200 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.9.1 IBM FlashSystem 7200 hardware components . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.9.2 IBM FlashSystem 7200 Control Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.9.3 IBM FlashSystem 7200 Model 824 system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
1.9.4 IBM FlashSystem 7200 Expansion Enclosures 12G and 24G . . . . . . . . . . . . . . . 37
1.9.5 Dense Expansion Enclosure 92G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.9.6 IBM FlashSystem 7200 Utility Model U7C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
1.10 IBM FlashSystem 9100 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
1.10.1 IBM FlashSystem 9100 hardware components . . . . . . . . . . . . . . . . . . . . . . . . . 41
1.10.2 IBM FlashSystem 9100 Control Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
1.10.3 IBM FlashSystem 9100 Model 9110 Control Enclosure AF7 . . . . . . . . . . . . . . . 44
1.10.4 IBM FlashSystem 9100 Model 9150 Control Enclosure AF8 . . . . . . . . . . . . . . . 44
1.10.5 IBM FlashSystem 9100 Model 9150 Expansion Enclosure Models AFF/A9F . . 45
1.10.6 IBM FlashSystem 9100 Utility Models UF7 and UF8 . . . . . . . . . . . . . . . . . 45
1.11 IBM FlashSystem 9200 overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
1.11.1 IBM FlashSystem 9200 hardware components . . . . . . . . . . . . . . . . . . . . . . . . . 46
1.11.2 IBM FlashSystem 9200 Control Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
1.11.3 IBM FlashSystem 9200 Control Enclosure Model AG8 . . . . . . . . . . . . . . . . . . . 49
1.11.4 IBM FlashSystem 9200 Utility Model UG8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1.11.5 IBM FlashSystem 9000 Expansion Enclosure Models AFF and A9F . . . . . . . . . 51
1.12 IBM FlashSystem 9200R Rack Solution overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
1.12.1 Minimum IBM FlashSystem 9200R Rack Solution configuration in the rack . . . 57
1.12.2 Maximum configuration of an IBM FlashSystem 9200R Rack Solution with Model
A9F Expansion Enclosures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
1.12.3 Maximum configuration of an IBM FlashSystem 9200R Rack Solution with Model
AFF Expansion Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
1.12.4 FC cabling and clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
1.12.5 IBM FlashSystem 9200R Rack Solution FC configuration with 8977-T32 Cisco
switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
1.12.6 IBM FlashSystem 9200R Rack Solution FC configuration with 8960-F34 Brocade
switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
1.12.7 IBM FlashSystem 9200R Rack Solution SAS Expansion Enclosures cabling . . 65
1.13 IBM FlashCore Module drives, NVMe SSDs, and SCM drives . . . . . . . . . . . . . . . . . . 67
1.13.1 Support for host platforms, adapters, and switches . . . . . . . . . . . . . . . . . . . . . . 70
1.14 Software features and licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
1.14.1 IBM Spectrum Virtualize for IBM FlashSystem 9100, 9200, and 7200 systems . 71
1.14.2 IBM Multi-Cloud starter software for IBM FlashSystem 9100, 9200, and 7200
systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
1.14.3 IBM FlashSystem 9100 Multi-Cloud solutions . . . . . . . . . . . . . . . . . . . . . . . . . . 72
1.14.4 IBM FlashSystem 9200 and 7200 Multi-Cloud solutions. . . . . . . . . . . . . . . . . . . 73
1.14.5 IBM FlashSystem 9100, 9200, and 7200 high availability. . . . . . . . . . . . . . . . . . 74
1.15 Data protection on IBM FlashSystem 9100, 9200, and 7200 systems . . . . . . . . . . . . 74
1.15.1 Variable Stripe RAID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
1.15.2 DRAID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

Chapter 2. Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
2.1 General planning rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.2 Planning for availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.3 Physical installation planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.4 Planning for system management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
2.5 Connectivity planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
2.6 Fibre Channel SAN configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
2.6.1 Physical topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
2.6.2 Zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
2.6.3 N_Port ID Virtualization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.6.4 Inter-node zone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.6.5 Back-end storage zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.6.6 Host zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
2.6.7 Zoning considerations for Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . 85
2.6.8 Port designation recommendations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
2.7 IP SAN configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.7.1 iSCSI and iSER protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.7.2 Priority flow control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.7.3 RDMA clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.7.4 iSCSI back-end storage attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
2.7.5 IP network host attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
2.7.6 Native IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
2.7.7 Firewall planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
2.8 Back-end storage configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
2.9 Internal storage configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
2.10 Storage pool configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
2.10.1 The storage pool and cache relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
2.11 Volume configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
2.11.1 Planning for image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
2.11.2 Planning for fully allocated volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
2.11.3 Planning for thin-provisioned volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
2.11.4 Planning for compressed volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
2.11.5 Planning for deduplicated volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
2.12 Host attachment planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
2.12.1 Queue depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
2.12.2 Microsoft Offloaded Data Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
2.12.3 SAN boot support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
2.12.4 Planning for large deployments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
2.12.5 Planning for SCSI UNMAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
2.13 Planning copy services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.13.1 FlashCopy guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.13.2 Planning for Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.14 Data migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
2.15 Performance monitoring with IBM Storage Insights . . . . . . . . . . . . . . . . . . . . . . . . . 101
2.16 Configuration backup procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

Chapter 3. Initial configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105


3.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
3.2 System initialization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3.2.1 System initialization process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3.3 System setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
3.3.1 System setup wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
3.4 Base configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
3.4.1 Configuring Remote Direct Memory Access clustering. . . . . . . . . . . . . . . . . . . . 121
3.4.2 Adding an enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
3.4.3 Changing the system topology to HyperSwap . . . . . . . . . . . . . . . . . . . . . . . . . . 126
3.4.4 Configuring quorum disks or applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
3.4.5 Configuring the local Fibre Channel port masking . . . . . . . . . . . . . . . . . . . . . . . 131
3.4.6 Automatic configuration for IBM SAN Volume Controller back-end storage . . . . 133
3.5 Configuring management access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
3.5.1 Configuring secure communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
3.5.2 Configuring user authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

Chapter 4. IBM Spectrum Virtualize GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147


4.1 Performing operations by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4.1.1 Accessing the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4.2 GUI introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4.2.1 Task menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4.2.2 Suggested tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.2.3 Notification icons and help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4.3 System - Overview window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4.3.1 Content-based organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4.4 Monitoring menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.4.1 System overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
4.4.2 Easy Tier Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
4.4.3 Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4.4.4 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
4.4.5 Background Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
4.5 Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
4.6 Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
4.7 Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
4.8 Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
4.9 Access. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
4.9.1 Ownership groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
4.9.2 Users by groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
4.9.3 Audit log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
4.10 Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
4.10.1 Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
4.10.2 Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
4.10.3 Using the management GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
4.10.4 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
4.10.5 System menus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
4.10.6 Support menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
4.10.7 GUI Preferences menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
4.11 Additional frequent tasks in the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
4.11.1 Renaming components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
4.11.2 Working with enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
4.11.3 Restarting the GUI service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219

Chapter 5. Storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221


5.1 Working with storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.1.1 Creating storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
5.1.2 Managed disks in a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5.1.3 Actions on storage pools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.1.4 Child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5.1.5 Encrypted storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
5.2 Working with internal drives and arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
5.2.1 Working with drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
5.2.2 RAID and distributed redundant array of independent disks. . . . . . . . . . . . . . . . 248
5.2.3 Creating arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
5.2.4 Actions on arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
5.3 Working with external controllers and MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
5.3.1 External storage controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
5.3.2 Actions for external storage controllers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
5.3.3 Working with external MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
5.3.4 Actions for external MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270

Chapter 6. Volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277


6.1 Introduction to volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
6.2 Volume characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
6.2.1 Volume type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
6.2.2 Managed mode and image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
6.2.3 Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
6.2.4 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
6.2.5 Volume copies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
6.2.6 I/O operations data flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
6.2.7 Storage efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
6.2.8 Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
6.2.9 Cache mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
6.2.10 I/O throttling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
6.2.11 Volume protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
6.2.12 Secure data deletion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
6.3 Virtual volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
6.4 Volumes in multi-site topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
6.5 Operations on volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
6.5.1 Creating volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
6.5.2 Creating custom volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
6.5.3 HyperSwap volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
6.5.4 I/O throttling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
6.5.5 Volume protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
6.5.6 Modifying a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
6.5.7 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
6.5.8 Mapping a volume to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
6.5.9 Migrating a volume to another storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
6.6 Volume operations by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
6.6.1 Displaying volume information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
6.6.2 Creating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
6.6.3 Creating a thin-provisioned volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
6.6.4 Creating a volume in image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
6.6.5 Adding a volume copy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
6.6.6 Splitting a mirrored volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
6.6.7 Modifying a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
6.6.8 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
6.6.9 Volume protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
6.6.10 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
6.6.11 HyperSwap volume modification with CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
6.6.12 Mapping a volume to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
6.6.13 Listing volumes that are mapped to the host . . . . . . . . . . . . . . . . . . . . . . . . . . 359
6.6.14 Listing hosts that are mapped to the volume . . . . . . . . . . . . . . . . . . . . . . . . . . 360
6.6.15 Deleting a volume to host mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
6.6.16 Migrating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
6.6.17 Migrating a fully managed volume to an image mode volume . . . . . . . . . . . . . 362
6.6.18 Shrinking a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
6.6.19 Listing volumes that use MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
6.6.20 Listing MDisks that are used by the volume . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
6.6.21 Listing volumes that are defined in the storage pool. . . . . . . . . . . . . . . . . . . . . 364
6.6.22 Listing storage pools in which a volume has its extents . . . . . . . . . . . . . . . . . . 364
6.6.23 Tracing a volume from a host back to its physical disks . . . . . . . . . . . . . . . . . . 366

Chapter 7. Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369


7.1 Host attachment overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
7.2 Host clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
7.3 NVMe over Fibre Channel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
7.4 N_Port ID Virtualization support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
7.4.1 NPIV prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
7.4.2 Enabling NPIV on a new system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
7.4.3 Enabling NPIV on an existing system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
7.5 Hosts operations by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
7.5.1 Creating hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
7.5.2 About host clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
7.5.3 Advanced host administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
7.5.4 Hosts and host clusters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
7.5.5 Ports by Host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
7.5.6 Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
7.5.7 Volumes by Host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
7.5.8 Volumes by Host Cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
7.6 Performing hosts operations by using the command-line interface. . . . . . . . . . . . . . . 430
7.6.1 Creating a host by using the CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
7.6.2 Advanced host administration by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . 433
7.6.3 Adding and deleting a host port by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . 436
7.6.4 Host cluster operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439

Chapter 8. Storage migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443


8.1 Storage migration overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
8.1.1 Interoperability and compatibility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
8.1.2 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
8.2 Storage migration wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446

Chapter 9. Advanced features for storage efficiency . . . . . . . . . . . . . . . . . . . . . . . . . 463


9.1 IBM Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
9.1.1 Easy Tier concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
9.1.2 Implementing and tuning Easy Tier. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
9.1.3 Monitoring Easy Tier activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
9.2 Thin-provisioned volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
9.2.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
9.2.2 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
9.3 UNMAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
9.3.1 The SCSI UNMAP command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
9.3.2 Back-end SCSI UNMAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
9.3.3 Host SCSI UNMAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
9.3.4 Offloading I/O throttle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
9.4 Data Reduction Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
9.4.1 Introduction to DRP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
9.4.2 DRP benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
9.4.3 Planning for DRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
9.4.4 Implementing DRP with compression and deduplication . . . . . . . . . . . . . . . . . . 491
9.5 Saving estimations for compression and deduplication . . . . . . . . . . . . . . . . . . . . . . . 498
9.5.1 Evaluating compression savings by using IBM Comprestimator . . . . . . . . . . . . 498
9.5.2 Evaluating compression and deduplication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
9.6 Overprovisioning and data reduction on external storage. . . . . . . . . . . . . . . . . . . . . . 500

Chapter 10. Advanced Copy Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505


10.1 IBM FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
10.1.1 Business requirements for FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
10.1.2 FlashCopy principles and terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
10.1.3 FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
10.1.4 Consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
10.1.5 Crash-consistent copy and hosts considerations . . . . . . . . . . . . . . . . . . . . . . . 510
10.1.6 Grains and bitmap: I/O indirection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
10.1.7 Interaction with cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
10.1.8 Background Copy Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
10.1.9 Incremental FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
10.1.10 Starting FlashCopy mappings and consistency groups . . . . . . . . . . . . . . . . . 520
10.1.11 Multiple target FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
10.1.12 Reverse FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
10.1.13 FlashCopy and image mode volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
10.1.14 FlashCopy mapping events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
10.1.15 Thin-provisioned FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
10.1.16 Serialization of I/O by FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
10.1.17 Event handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
10.1.18 Asynchronous notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
10.1.19 Interoperation with Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . 534
10.1.20 FlashCopy attributes and limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
10.2 Managing FlashCopy by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
10.2.1 FlashCopy presets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
10.2.2 FlashCopy window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
10.2.3 Creating a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
10.2.4 Single-click snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
10.2.5 Single-click clone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
10.2.6 Single-click backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
10.2.7 Creating a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
10.2.8 Creating FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
10.2.9 Showing related volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
10.2.10 Moving FlashCopy mappings across consistency groups . . . . . . . . . . . . . . . 562
10.2.11 Removing FlashCopy mappings from consistency groups . . . . . . . . . . . . . . . 564
10.2.12 Modifying a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
10.2.13 Renaming FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
10.2.14 Deleting FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
10.2.15 Deleting a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
10.2.16 Starting FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
10.2.17 Stopping FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
10.2.18 Memory allocation for FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
10.3 Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
10.3.1 Considerations for using Transparent Cloud Tiering. . . . . . . . . . . . . . . . . . . . . 576
10.3.2 Transparent Cloud Tiering as a backup solution and for data migration. . . . . . 577
10.3.3 Restoring data by using Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . 577
10.3.4 Transparent Cloud Tiering restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
10.4 Implementing Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
10.4.1 Domain name server configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
10.4.2 Enabling Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
10.4.3 Creating cloud snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
10.4.4 Managing cloud snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
10.4.5 Restoring cloud snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
10.5 Volume mirroring and migration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
10.6 Remote Copy services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
10.6.1 IBM FlashSystem 9100 system layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
10.6.2 Multiple IBM Spectrum Virtualize systems replication. . . . . . . . . . . . . . . . . . . . 593
10.6.3 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
10.6.4 Remote Copy intercluster communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
10.6.5 Metro Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
10.6.6 Synchronous Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
10.6.7 Metro Mirror features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
10.6.8 Metro Mirror attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
10.6.9 Practical use of Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
10.6.10 Global Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
10.6.11 Asynchronous Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
10.6.12 Global Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
10.6.13 Using GMCV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
10.6.14 Distribution of work among nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
10.6.15 Background copy performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
10.6.16 Thin-provisioned background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
10.6.17 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
10.6.18 Practical use of Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
10.6.19 IBM Spectrum Virtualize HyperSwap topology . . . . . . . . . . . . . . . . . . . . . . . . 610
10.6.20 Consistency Protection for Global Mirror and Metro Mirror. . . . . . . . . . . . . . . 611
10.6.21 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror . . . . . . . . 612
10.6.22 Remote Copy configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
10.6.23 Remote Copy states and events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
10.7 Remote Copy commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
10.7.1 Remote Copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
10.7.2 Listing available system partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
10.7.3 Changing the system parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
10.7.4 System partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
10.7.5 Creating a Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 623
10.7.6 Creating a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 624
10.7.7 Changing a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 624
10.7.8 Changing a Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . 624
10.7.9 Starting a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 624
10.7.10 Stopping a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 625
10.7.11 Starting a Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . 625
10.7.12 Stopping a Metro Mirror/Global Mirror consistency group. . . . . . . . . . . . . . . . 625
10.7.13 Deleting a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 626
10.7.14 Deleting a Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . 626
10.7.15 Reversing a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . 626
10.7.16 Reversing a Metro Mirror/Global Mirror consistency group. . . . . . . . . . . . . . . 627
10.8 Native IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
10.8.1 Native IP replication technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
10.8.2 IP partnership limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
10.8.3 IP partnership and data compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
10.8.4 VLAN support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
10.8.5 IP partnership and terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
10.8.6 States of IP partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
10.8.7 Remote Copy groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
10.8.8 Supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
10.9 Managing Remote Copy by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
10.9.1 Creating a Fibre Channel partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
10.9.2 Creating Remote Copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
10.9.3 Creating a consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
10.9.4 Renaming Remote Copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
10.9.5 Renaming a Remote Copy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . 668
10.9.6 Moving stand-alone Remote Copy relationships to consistency group. . . . . . . 670
10.9.7 Removing Remote Copy relationships from a consistency group. . . . . . . . . . . 671
10.9.8 Starting Remote Copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
10.9.9 Starting a Remote Copy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
10.9.10 Switching a relationship’s copy direction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
10.9.11 Switching a consistency group direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
10.9.12 Stopping Remote Copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
10.9.13 Stopping a consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
10.9.14 Deleting Remote Copy relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
10.9.15 Deleting a consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
10.10 Remote Copy memory allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
10.11 Troubleshooting Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
10.11.1 1920 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
10.11.2 1720 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688

Chapter 11. Ownership groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689


11.1 Ownership groups principles of operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
11.2 Implementing ownership groups on a new system . . . . . . . . . . . . . . . . . . . . . . . . . . 692
11.2.1 Creating an ownership group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
11.2.2 Assigning users to an ownership group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
11.2.3 Creating ownership group resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
11.2.4 Listing ownership group resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
11.2.5 Actions on ownership groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
11.3 Migrating existing objects to ownership groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697

Chapter 12. Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701


12.1 General types of encryption across IBM Spectrum Virtualize . . . . . . . . . . . . . . . . . . 702
12.1.1 Externally virtualized storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
12.1.2 Serial-attached SCSI internal storage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
12.1.3 Non-Volatile Memory Express internal storage. . . . . . . . . . . . . . . . . . . . . . . . . 702
12.2 Planning for encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
12.3 Defining encryption of data at-rest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
12.3.1 Encryption methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
12.3.2 Encrypted data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
12.3.3 Encryption keys. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
12.3.4 Encryption licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
12.4 Activating encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
12.4.1 Obtaining an encryption license . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
12.4.2 Starting the activation process during the initial system setup . . . . . . . . . . . . . 709
12.4.3 Starting the activation process on a running system. . . . . . . . . . . . . . . . . . . . . 712
12.4.4 Activating the license automatically . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
12.4.5 Activating the license manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
12.5 Enabling encryption. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
12.5.1 Starting the Enable Encryption wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
12.5.2 Enabling encryption by using USB flash drives. . . . . . . . . . . . . . . . . . . . . . . . . 721
12.5.3 Enabling encryption by using key servers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 726
12.5.4 Enabling encryption by using both providers . . . . . . . . . . . . . . . . . . . . . . . . . . 739
12.6 Configuring more providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
12.6.1 Adding key servers as a second provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
12.6.2 Adding USB flash drives as a second provider . . . . . . . . . . . . . . . . . . . . . . . . . 747
12.7 Migrating between providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
12.7.1 Changing from a USB flash drive provider to an encryption key server . . . . . . 749
12.7.2 Changing from an encryption key server to a USB flash drive provider . . . . . . 749
12.7.3 Migrating between different key server types . . . . . . . . . . . . . . . . . . . . . . . . . . 750
12.8 Recovering from a provider loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
12.9 Using encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
12.9.1 Encrypted pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
12.9.2 Encrypted child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
12.9.3 Encrypted arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
12.9.4 Encrypted MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
12.9.5 Encrypted volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
12.9.6 Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
12.10 Rekeying an encryption-enabled system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
12.10.1 Rekeying by using a key server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 762
12.10.2 Rekeying by using USB flash drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
12.11 Disabling encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767

Chapter 13. Reliability, availability, and serviceability, monitoring and logging, and
troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
13.1 Reliability, availability, and serviceability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
13.1.1 Node canisters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
13.1.2 Expansion canisters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
13.1.3 Dense Drawer Enclosures LED . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
13.1.4 Enclosure SAS cabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
13.1.5 IBM FlashCore Module drives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
13.1.6 Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
13.2 Shutting down the IBM FlashSystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
13.2.1 Shutting down and powering on a complete infrastructure . . . . . . . . . . . . . . . . 781
13.3 Removing or adding a node from or to the system . . . . . . . . . . . . . . . . . . . . . . . . . . 782
13.4 Configuration backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
13.4.1 Backing up by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
13.4.2 Saving the backup by using the GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
13.5 Software update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
13.5.1 Precautions before the update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
13.5.2 IBM FlashSystem update test utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 790
13.5.3 Updating your IBM FlashSystem to Version 8.3.1 . . . . . . . . . . . . . . . . . . . . . . 791
13.5.4 Updating the IBM FlashSystem drive code . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
13.5.5 Manually updating the system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
13.6 Health checker feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 804
13.7 Troubleshooting and fix procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
13.7.1 Managing the event log. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807
13.7.2 Running a fix procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
13.7.3 Event log details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
13.8 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
13.8.1 Email notifications and the Call Home function. . . . . . . . . . . . . . . . . . . . . . . . . 813
13.8.2 Remote Support Assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
13.8.3 SNMP configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 824
13.8.4 Syslog notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
13.9 Audit log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
13.10 Collecting support information by using the GUI, CLI, and USB . . . . . . . . . . . . . . . 831
13.10.1 Collecting information by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
13.10.2 Collecting logs by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
13.10.3 Collecting logs by using a USB flash drive . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
13.10.4 Uploading files to the IBM Support Center . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
13.11 Service Assistant Tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
13.12 IBM Storage Insights monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
13.12.1 Capacity monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 842
13.12.2 Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
13.12.3 Logging support tickets by using IBM Storage Insights . . . . . . . . . . . . . . . . . 846
13.12.4 Managing existing support tickets by using IBM Storage Insights and uploading
logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853

Appendix A. Performance data and statistics gathering. . . . . . . . . . . . . . . . . . . . . . . 855


IBM Storage System performance overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
IBM Spectrum Virtualize performance perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . 857
Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
Collecting performance statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858

Real-time performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
Performance data collection and IBM Spectrum Control . . . . . . . . . . . . . . . . . . . . . . . 868

Appendix B. Command-line interface setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 871


CLI setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
Basic setup on a Windows host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
Basic setup on an UNIX or Linux host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881

Appendix C. Terminology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885


Commonly encountered terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 913


IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 913
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 913

Notices

This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US

INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS”
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in
certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.

The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.

Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.



Trademarks
IBM, the IBM logo, and [Link] are trademarks or registered trademarks of International Business Machines
Corporation, registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at “Copyright
and trademark information” at [Link]

The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, Db2®, DS8000®, Easy Tier®, FICON®, FlashCopy®, HyperSwap®, IBM®, IBM Cloud®, IBM FlashCore®, IBM FlashSystem®, IBM Research®, IBM Security™, IBM Spectrum®, IBM Spectrum Storage™, MicroLatency®, PowerHA®, PowerVM®, Real-time Compression Appliance®, Redbooks®, Redbooks (logo)®, Storwize®, System Storage™, XIV®

The following terms are trademarks of other companies:

Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.

The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.

OpenShift, Red Hat, are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United
States and other countries.

UNIX is a registered trademark of The Open Group in the United States and other countries.

VMware, VMware vSphere, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or
its subsidiaries in the United States and/or other jurisdictions.

Other company, product, or service names may be trademarks or service marks of others.

Preface

Continuing its commitment to developing and delivering industry-leading storage


technologies, IBM® introduces the IBM FlashSystem® solution that is powered by
IBM Spectrum® Virtualize V8.3.1. This innovative storage offering delivers essential storage
efficiency technologies and exceptional ease of use and performance, all integrated into a
compact, modular design that is offered at a competitive, midrange price.

The solution incorporates some of the top IBM technologies that are typically found only in
enterprise-class storage systems, which raises the standard for storage efficiency in
midrange disk systems. This cutting-edge storage system extends the comprehensive
storage portfolio from IBM and can help change the way organizations address the ongoing
information explosion.

This IBM Redbooks® publication introduces the features and functions of an IBM Spectrum
Virtualize V8.3.1 system through several examples. This book is aimed at pre-sales and
post-sales technical support and marketing and storage administrators. It helps you
understand the architecture, how to implement it, and how to take advantage of its
industry-leading functions and features.

Applicability: This edition applies to IBM Spectrum Virtualize V8.3.1 and the associated
hardware and software that is detailed within. The screen captures included within this
book might differ from the generally available (GA) version because parts of this book were
written with pre-GA code. On 11 February 2020, IBM announced that it was simplifying its
portfolio. This book was written by using previous models of the product line before the
simplification; however, most of the general principles apply. If you are in any doubt as to
their applicability, contact your local IBM representative.

IBM Knowledge Center: In this book we provide links to Knowledge Center and a
description of the relevant section that provides more information. Our starting point is the
IBM FlashSystem 9200 family page, and the reader may have to select the product that
applies to their environment.

Authors
This book was produced by a team of specialists from around the world working at IBM
Redbooks, San Jose Center.

Jack Armstrong is a Storage Support Specialist for IBM Systems Group at Hursley, UK. He joined IBM as part of the Apprenticeship Scheme in 2012 and has 7 years of experience working with IBM Storage and providing support to thousands of customers across Europe and beyond. He also works with IBM Enhanced Technical Support Services to help clients expand and improve their storage environments.



Tiago Bastos is a storage area network (SAN) and Storage
Disk specialist at IBM Brazil. He has over 20 years in the IT
arena, and is an IBM Certified Master IT Specialist. Certified for
IBM Storwize®, he works on storage as a service (SaaS)
implementation projects. His areas of expertise include
planning, configuring, and troubleshooting IBM DS8000®, IBM
FlashSystem, IBM SAN Volume Controller, and IBM XIV®;
lifecycle management; and copy services.

Pawel Brodacki is an Infrastructure Architect with 20 years of experience in IT who has worked for IBM Poland since 2003. His main focus for the last 5 years has been on virtual infrastructure architecture, from storage to servers to software-defined networks (SDNs). Before changing his profession to system architecture, he was an IBM Certified IT Specialist working on various infrastructure, virtualization, and disaster recovery (DR) projects. His experience includes SAN, storage, highly available (HA) systems, DR solutions, IBM System x and IBM Power Systems servers, and several types of operating systems (OSs) (Linux, IBM AIX®, and Microsoft Windows). Pawel has obtained certifications from IBM, Red Hat, and VMware. Pawel holds a master's degree in biophysics from the University of Warsaw College of Inter-Faculty Individual Studies in Mathematics and Natural Sciences.

Markus Döllinger is a Software Developer and Support Engineer for IBM Germany. He joined IBM as a student in 2009 and has 8 years of experience as an IBM Storage specialist. He works as an L3 support engineer and developer for the IBM Spectrum Virtualize family of products. His responsibilities include providing storage support for customers worldwide, leading the design and implementation of support tools, and developing IBM Spectrum Virtualize Software. He has an active role in the proactive support initiative to provide real-time feedback to customers with regard to best practice recommendations based on their specific storage environment. He holds a Bachelor of Science degree in Computer Science.

Jon Herd is an IBM Storage Technical Advisor working for the
EMEA Storage Competence Center (ESCC) in Mainz,
Germany. He advises customers in the United Kingdom,
Ireland, and Sweden about a portfolio of IBM Storage products,
including IBM FlashSystem products. Jon has been with IBM
for more than 45 years, and has held various technical roles,
including Europe, the Middle East, and Africa (EMEA) level 2
support for mainframe servers and technical education
development. He has written many IBM Redbooks publications
on IBM FlashSystem products, and is an IBM Redbooks
Platinum level author. He holds IBM certifications in Product
Services at an expert level and Technical IT Specialist at an
experienced level. He is the chair of the UKI Professions Board
for Product Services. He is a certified Chartered Member of the
British Computer Society (MBCS-CITP) and a Certified
Member of the Institution of Engineering and Technology
(MIET).

Sergey Kubin is a subject matter expert (SME) for IBM Storage and SAN technical support. He holds an Electronics Engineer degree from Ural Federal University in Russia and has more than 15 years of experience in IT. At IBM, he works for IBM Technology Support Services, where he provides support and guidance about IBM Spectrum Virtualize family systems for customers in Europe, the Middle East, and Russia. His expertise includes SAN, block-level, and file-level storage systems and technologies. He is an IBM Certified Specialist for IBM FlashSystem Family Technical Solutions.

Carsten Larsen is an IBM Certified Senior IT Specialist working for the Technical Services Support organization at IBM Denmark, where he delivers consultancy services to IBM clients within the storage arena. Carsten joined IBM in 2007 when he left HP, where he worked with storage arrays and UNIX for 10 years. While working for IBM, Carsten obtained several Brocade and NetApp certifications. Carsten is the author of several IBM Redbooks publications.

Hartmut Lonzer is the IBM Storwize Territory Account Manager for Germany (D), Austria (A), and Switzerland (CH) (DACH). Before this position, he was OEM Alliance Manager for Lenovo in IBM Germany. He works at the IBM Germany headquarters in Ehningen.

His main focus is on the IBM FlashSystem Family and the SAN Volume Controller. His experience with the SAN Volume Controller and IBM FlashSystem products goes back to the beginning of these products. Hartmut has been with IBM in various technical and sales roles for 42 years.

Jon Tate is a Project Manager for IBM System Storage™ SAN
Solutions at the ITSO, San Jose Center. Before joining the
ITSO in 1999, he worked in the IBM Technical Support Center,
providing Level 2 and 3 support for IBM mainframe storage
products. Jon has 34 years of experience in storage software
and management, services, and support. He is an IBM
Certified IT Specialist, an IBM SAN Certified Specialist, and is
Project Management Professional (PMP) certified. He is also
the UK Chairman of the Storage Networking Industry
Association (SNIA).

Thanks to the following for their contributions that made this book possible:

Alex Ainscow, Djihed Afifi, Christopher Bulmer, Debbie Butts, Philip Clark, Carlos Fuente,
Sally Neate, Evelyn Perez, Suri Polisetti, Matt Smith, Andy Walsh
IBM Hursley, UK

James Whitaker
IBM Manchester, UK

Karen Brown, Mary Connell, Navin Manohar, Terry Niemeyer, Jim Olson, Andy Walls, Brent
Yardley
IBM US

Shrirang Bhagwat, Abhishek Jaiswal, Aakanksha Mathur, Sanjay Pathak, Sudharsan Vangal
IBM India

Jorge Enrique Escalante Ramonet
IBM Mexico

Special thanks to the Broadcom Inc. staff in San Jose, California for their support of this
residency in terms of equipment and support in many areas:

Sangam Racherla, Brian Steffler, Marcus Thordal
Broadcom Inc.

Now you can become a published author, too!


Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an IBM Redbooks residency project and help write a book
in your area of expertise, while honing your experience using leading-edge technologies. Your
efforts will help to increase product acceptance and customer satisfaction, as you expand
your network of technical contacts and relationships. Residencies run from two to six weeks
in length, and you can participate either in person or as a remote resident working from your
home base.

Find out more about the residency program, browse the residency index, and apply online at:
[Link]/redbooks/[Link]

Comments welcome
Your comments are important to us!

We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
򐂰 Use the online Contact us review Redbooks form found at:
[Link]/redbooks
򐂰 Send your comments in an email to:
redbooks@[Link]
򐂰 Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks


򐂰 Find us on Facebook:
[Link]
򐂰 Follow us on Twitter:
[Link]
򐂰 Look for us on LinkedIn:
[Link]
򐂰 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
[Link]
򐂰 Stay current on recent Redbooks publications with RSS Feeds:
[Link]

Chapter 1. Introduction and system overview
This chapter defines the concept of storage virtualization and provides an overview of its application in addressing the challenges of modern storage environments. It also contains an overview of each of the products that make up the IBM FlashSystem family.

This chapter includes the following topics:


򐂰 1.1, “Benefits of using IBM Spectrum Virtualize” on page 2
򐂰 1.2, “Storage virtualization terminology” on page 4
򐂰 1.3, “Latest changes and enhancements” on page 7
򐂰 1.4, “IBM Spectrum Virtualize family” on page 9
򐂰 1.5, “IBM FlashSystem 5100 overview” on page 11
򐂰 1.6, “Features for storage efficiency and data reduction” on page 20
򐂰 1.7, “Copy services and HyperSwap” on page 25
򐂰 1.8, “Features for manageability” on page 30
򐂰 1.9, “IBM FlashSystem 7200 overview” on page 34
򐂰 1.10, “IBM FlashSystem 9100 overview” on page 41
򐂰 1.11, “IBM FlashSystem 9200 overview” on page 46
򐂰 1.12, “IBM FlashSystem 9200R Rack Solution overview” on page 54
򐂰 1.13, “IBM FlashCore Module drives, NVMe SSDs, and SCM drives” on page 67
򐂰 1.14, “Software features and licensing” on page 71
򐂰 1.15, “Data protection on IBM FlashSystem 9100, 9200, and 7200 systems” on page 74



1.1 Benefits of using IBM Spectrum Virtualize
The storage virtualization functions of IBM Spectrum Virtualize are a powerful tool in the hands of storage administrators. However, for an organization to fully realize the benefits of storage virtualization, its implementation must be the result of a process that begins with identifying the organization's goals. For a storage virtualization project to be a success, the organization must identify what it wants to achieve before it starts to think about how to implement the solution.

Today, organizations are searching for affordable and efficient ways to store, use, protect, and
manage their data. Additionally, a storage environment is required to have an easy to manage
interface and be sufficiently flexible to support a wide range of applications, servers, and
mobility requirements. Although business demands change quickly, some recurring client
concerns drive adoption of storage virtualization, including the following examples:
򐂰 Growing data center costs
򐂰 Inability of IT organizations to respond quickly to business demands
򐂰 Poor asset usage
򐂰 Poor availability and the resultant unsatisfactory (for the clients) or challenging (for the
providers) service levels
򐂰 Lack of skilled staff for storage administration

Storage virtualization is one of the foundations of building a flexible and reliable infrastructure solution that enables companies to better align their technology needs with their business requirements. Storage virtualization enables an organization to achieve affordability and manageability by implementing storage pools across several physically separate disk systems (which might be from different vendors).

Storage can then be deployed from these pools, and migrated transparently between pools
without interruption to the attached host systems and their applications. Storage virtualization
provides a single set of tools for advanced functions, such as instant copy and remote
mirroring solutions, which enables faster and seamless deployment of storage regardless of
the underlying hardware.

Because the storage virtualization that is represented by IBM Spectrum Virtualize is a software-enabled function, it offers more features that are typically not available on a regular or “pure” storage subsystem, including (but not limited to) the following features:
򐂰 Data compression
򐂰 Software and hardware encryption
򐂰 IBM Easy Tier® for workload balancing
򐂰 Thin provisioning
򐂰 Mirroring and copy services
򐂰 Interface to cloud service providers (CSPs)

Figure 1-1 on page 3 shows these features.

Figure 1-1 IBM Spectrum Storage virtualization

The following reasons are the top reasons to choose a software-defined storage (SDS)
virtualization infrastructure and benefit from IBM Spectrum Virtualize:
򐂰 Lower license cost. Avoid purchasing licenses from multiple storage vendors for advanced
features (replication, tiering, snapshots, compression, and so on) of each external storage
subsystem. Manage them centrally from IBM Spectrum Virtualize.
򐂰 Feed more data onto physical disks. External storage area network (SAN) disk arrays
have physical boundaries. Although one subsystem might run out of space, another has
free capacity. Virtualization removes these boundaries.
򐂰 Choose lower-cost disk arrays. Less-demanding applications can easily run on cheaper disks with lower performance. IBM Spectrum Virtualize Easy Tier automatically and transparently moves data up and down between low-performance and high-performance disk arrays (tiering).
򐂰 IBM Spectrum Cloud FlashCopy®. IBM Spectrum Virtualize has an interface to the
external CSPs. IBM Spectrum Cloud FlashCopy supports full and incremental backup and
restore from cloud snapshots.
򐂰 Quick adoption of new technologies. IBM Spectrum Virtualize seamlessly integrates
invention in storage technologies, such as new array types and new disk vendors.
򐂰 Extended high availability (HA). Cross-site virtualization, workload migration, and copy
services enhance options for deployment of HA scenarios or disaster recovery (DR)
solutions (IBM SAN Volume Controller Enhanced Stretched Cluster and IBM
HyperSwap®).

The importance of addressing the complexity of managing storage networks is clearly visible
in the results of industry analyses of the total cost of ownership (TCO) of storage networks.
Typically, storage costs are only about 20% of the TCO. Most of the remaining costs relate to
managing storage systems.

With IBM Spectrum Virtualize, free space does not need to be maintained and managed in
each storage subsystem, which further increases capacity usage.



In a non-virtualized storage environment, every system is an “island” that must be managed
separately. In large SAN environments, the sheer number of separate and different
management interfaces and lack of a unified view of the whole environment jeopardizes an
organization’s ability to manage their storage as a single entity and maintain the current view
of the system state.

IBM Storwize and IBM FlashSystem servers running IBM Spectrum Virtualize Software
reduce the number of separate environments that must be managed down to a single system.
After the initial configuration of the back-end storage subsystems, all of the day-to-day
storage management operations are performed by using a single GUI. Concurrently,
administrators gain access to the rich function set that is provided by IBM Spectrum
Virtualize, where particular features are natively available on the virtualized storage systems.

1.2 Storage virtualization terminology


Storage virtualization is a term that is used extensively throughout the storage industry. It can
be applied to various technologies and underlying capabilities. In reality, most storage devices
technically can claim to be virtualized in one form or another. Therefore, this chapter starts by
defining the concept of storage virtualization as it is used in this book.

We describe storage virtualization in the following way:


򐂰 Storage virtualization is a technology that makes one set of resources resemble another set of resources, preferably with more desirable characteristics.
򐂰 Storage virtualization is a logical representation of resources that is not constrained by
physical limitations and hides part of the complexity. It also adds or integrates new
functions with services, and can be nested or applied to multiple layers of a system.

The virtualization model consists of the following layers:


򐂰 Application: The user of the storage domain.
򐂰 Storage domain:
– File, record, and namespace virtualization and file and record subsystem
– Block virtualization
– Block subsystem

Applications typically read and write data as vectors of bytes or records. However, storage presents data as vectors of blocks of a constant size (512 bytes or, in newer devices, 4096 bytes per block).

The file, record, and namespace virtualization and file and record subsystem layers convert
records or files that are required by applications to vectors of blocks, which are the language
of the block virtualization layer. The block virtualization layer maps requests of the higher
layers to physical storage blocks, which are provided by storage devices in the block
subsystem.

Each of the layers in the storage domain abstracts away complexities of the lower layers and
hides them behind an easy-to-use, standard interface that is presented to upper layers. The
resultant decoupling of logical storage space representation and its characteristics that are
visible to servers (storage consumers) from underlying complexities and intricacies of storage
devices is a key concept of storage virtualization.

The focus of this publication is block-level virtualization at the block virtualization layer,
which is implemented by IBM as IBM Spectrum Virtualize Software that is running on SAN
Volume Controller, and the IBM FlashSystem and IBM Storwize families. The SAN Volume
Controller is implemented as a clustered appliance in the storage network layer. The IBM
FlashSystem and IBM Storwize families are deployed as modular storage systems that can
virtualize their internally and externally attached storage.

IBM Spectrum Virtualize uses the Small Computer System Interface (SCSI) protocol to
communicate with its clients and presents storage space as SCSI logical units (LUs), which
are identified by SCSI logical unit numbers (LUNs).

Note: Although LUs and LUNs are different entities, the term LUN in practice is often used
to refer to a logical disk, that is, an LU.

Although most applications do not directly access storage but work with files or records, the
operating system (OS) of a host must convert these abstractions to the language of storage,
that is, vectors of storage blocks that are identified by logical block addresses (LBAs) within
an LU.

Inside IBM Spectrum Virtualize, each of the externally visible LUs is internally represented by
a volume, which is an amount of storage that is taken out of a storage pool. Storage pools are
made of managed disks (MDisks), that is, they are LUs that are presented to the storage
system by external virtualized storage or arrays that consist of internal disks. LUs that are
presented to IBM Spectrum Virtualize by external storage usually correspond to RAID arrays
that are configured on that storage.
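
To make this hierarchy concrete, the following CLI sketch shows how a pool might be built from MDisks and how a volume might be provisioned and mapped from it. The object names (mdisk0, mdisk1, Pool0, vol0, and host0) are hypothetical, so treat this as an illustration of the layering rather than a configuration procedure:

   # Create a storage pool (managed disk group) from two MDisks,
   # using a 1024 MB extent size
   mkmdiskgrp -name Pool0 -ext 1024 -mdisk mdisk0:mdisk1

   # Carve a 100 GB volume (externally visible as a SCSI LU) out of the pool
   mkvdisk -name vol0 -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb

   # Present the volume to a host, which sees it as a LUN
   mkvdiskhostmap -host host0 vol0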

With storage virtualization, you can manage the mapping between logical blocks within an LU
that is presented to a host, and blocks on physical drives. This mapping can be as simple or
as complicated as required by a use case. A logical block can be mapped to one physical
block or for increased availability, multiple blocks that are physically stored on different
physical storage systems, and in different geographical locations.

Importantly, the mapping can be dynamic: With Easy Tier, IBM Spectrum Virtualize can
automatically change underlying storage to which groups of blocks (extent) are mapped to
better match a host’s performance requirements with the capabilities of the underlying
storage systems.

IBM Spectrum Virtualize gives a storage administrator a wide range of options to modify
volume characteristics: from volume resize to mirroring, creating a point-in-time (PiT) copy
with IBM FlashCopy®, and migrating data across physical storage systems. Importantly, all
the functions that are presented to the storage users are independent from the characteristics
of the physical devices that are used to store data. This decoupling of the storage feature set
from the underlying hardware and ability to present a single, uniform interface to storage
users that masks underlying system complexity is a powerful argument for adopting storage
virtualization with IBM Spectrum Virtualize.

IBM Spectrum Virtualize includes the following key features:


򐂰 Simplified storage management by providing a single management interface for multiple
storage systems and a consistent user interface for provisioning heterogeneous storage.
򐂰 Online volume migration. IBM Spectrum Virtualize enables moving the data from one set of physical drives to another set in a way that is not apparent to the storage users and without over-straining the storage infrastructure. The migration can be done within a specific storage system (from one set of disks to another set) or across storage systems. Either way, the host that uses the storage is not aware of the operation, and no downtime for applications is needed (a short CLI sketch of such a migration follows this list).



򐂰 Enterprise-level Copy Services functions. Performing Copy Services functions within IBM
Spectrum Virtualize removes dependencies on the capabilities and intercompatibility of
the virtualized storage subsystems. Therefore, it enables the source and target copies to
be on any two virtualized storage subsystems.
򐂰 Improved storage space usage because of the pooling of resources across virtualized
storage systems.
򐂰 Opportunity to improve system performance as a result of volume striping across multiple
virtualized arrays or controllers, and the benefits of cache that is provided by IBM
Spectrum Virtualize hardware.
򐂰 Improved data security by using data-at-rest encryption.
򐂰 Data replication, including replication to cloud storage by using advanced copy services
for data migration and backup solutions.
򐂰 Data reduction techniques for space efficiency, such as thin provisioning, Data Reduction
Pools (DRPs), deduplication, and IBM Real-time Compression (IBM RtC) Appliance.
Today, open systems typically use less than 50% of the provisioned storage capacity. IBM
Spectrum Virtualize can enable significant savings, increase the effective capacity of
storage systems up to five times, and decrease the floor space, power, and cooling that
are required by the storage system.

Note: IBM RtC is available only for earlier generation IBM Storwize V5000 and V7000
systems. The newer SAN Volume Controllers, the IBM FlashSystem 9100, 9200, 7200
and the IBM Storwize 5100 and V7000 Gen3 systems, do not support IBM RtC. They
support software compression only through DRPs.
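
As referenced in the list above, an online volume migration can be driven with a pair of CLI commands. The following minimal sketch uses hypothetical volume and pool names; hosts continue to perform I/O while the data moves:

   # Move volume vol0 to Pool1; host access continues during the migration
   migratevdisk -vdisk vol0 -mdiskgrp Pool1

   # Monitor the progress of the data migration
   lsmigrate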

IBM Storwize and IBM FlashSystem families are scalable solutions running on an HA platform that can use diverse back-end storage systems to provide all the benefits to various attached hosts.

Summary
Storage virtualization is a fundamental technology that enables the realization of flexible and reliable storage solutions. It helps enterprises to better align IT architecture with business requirements, simplify their storage administration, and facilitate their IT departments' efforts to meet business demands.

IBM Spectrum Virtualize running on IBM Storwize and IBM FlashSystem families is a mature,
10th-generation virtualization solution that uses open standards and complies with the SNIA
storage model. All the products are appliance-based storage, and use in-band block
virtualization engines that move the control logic (including advanced storage functions) from
a multitude of individual storage devices to a centralized entity in the storage network.

IBM Spectrum Virtualize can improve the usage of your storage resources, simplify storage
management, and improve the availability of business applications.

1.3 Latest changes and enhancements
IBM Spectrum Virtualize V8.3.1 is another step in the product line development that brings
new features and enhancements. The major software changes in subsequent code releases
are described in this section.

The IBM Spectrum Virtualize V8.3.1 code brought the following changes:
򐂰 Ownership groups:
– An ownership group defines a subset of users and objects within the system. You can
create ownership groups to further restrict access to specific resources that are
defined in the ownership group. Only users with Security Administrator roles can
configure and manage ownership groups.
– Ownership groups restrict access for users in the ownership group to only those
objects that are defined within that ownership group. An owned object can belong to
one ownership group. Users in an ownership group are restricted to viewing and
managing objects within their ownership group. Users that are not in an ownership
group can continue to view or manage all the objects on the system based on their
defined user role, including objects within ownership groups. When a user within an ownership group logs on to the management GUI or command-line interface (CLI), only the resources to which they have access through the ownership group are available (a hedged CLI sketch of a possible ownership group setup follows this list).
The following objects can be assigned to ownership groups:
• Child pools
• Volumes
• Volume groups
• Hosts
• Host clusters
• Host mappings
• FlashCopy mappings
• FlashCopy consistency groups
• User groups
򐂰 Priority flow control (PFC).
PFC is an Ethernet protocol that supports the ability to select the priority of different types
of traffic within the network. With PFC, administrators can reduce network congestion by
slowing or pausing certain classes of traffic on ports, thus providing better bandwidth for
more important traffic. The system supports PFC on various supported Ethernet-based
protocols on three types of traffic classes: system, host attachment, and storage traffic.
򐂰 Support for IBM Easy Tier overallocation limit for pools with IBM FlashCore® Module
devices as the top tier of storage.
The system includes IBM Easy Tier, which is a function that responds to the presence of flash drives in a storage pool that also contains hard disk drives (HDDs). The system automatically and nondisruptively moves frequently accessed data from HDD MDisks to flash-based storage MDisks, thus placing such data in a faster tier of storage.
The system supports these tiers:
– Storage-class memory (SCM).
SCM tier exists when the pool contains drives that use persistent memory technologies
that improve the endurance and speed of current flash storage device technologies.
– Tier 0 flash.
The tier 0 flash tier exists when the pool contains high-performance flash drives.



– Tier 1 flash.
The tier 1 flash tier exists when the pool contains tier 1 flash drives. Tier 1 flash drives
typically offer larger capacities, but slightly lower performance and write endurance
characteristics.
– Enterprise tier.
The enterprise tier exists when the pool contains enterprise-class MDisks, which are
disk drives that are optimized for performance.
– Nearline (NL) tier.
The NL tier exists when the pool contains NL-class MDisks, which are disk drives that
are optimized for capacity.
򐂰 Updated requirements, recommendations, and limitations for Easy Tier automatic data placement.
򐂰 A hardware upgrade from an IBM FlashSystem 9100 Model AF7 to a Model AF8 is now available.
򐂰 Support for 32 GB Fibre Channel (FC) PCIe adapters.
򐂰 Support for expanding distributed arrays.
You can dynamically expand distributed arrays to increase the available capacity of the
array or create more rebuild space. As part of the expansion, the system automatically
migrates data for optimal performance for the new expanded configuration. Expansion of distributed arrays supports the incremental growth of the available capacity of arrays and is compatible with other functions, such as IBM Easy Tier and data migrations (see the CLI sketch after this list).
򐂰 Support for pool-level volume protection.
Volume protection prevents active volumes or host mappings from being deleted
inadvertently if the system detects recent I/O activity. This global setting is enabled by
default on new systems. You can either set this value to apply to all volumes that are
configured on your system, or control whether the system-level volume protection is
enabled or disabled on specific pools.
򐂰 Support for Simple Network Management Protocol (SNMP) protocol Version 3 enhanced
security features:
– SNMP is a standard protocol for managing networks and exchanging messages. The
system can send SNMP messages that notify personnel about an event. You can use
an SNMP manager to view the SNMP messages that the system sends. The system
supports both SNMP Version 2 and Version 3.
– Some systems support setting up SNMP notifications for events. Event notifications
are reported to the SNMP destinations of your choice. To specify an SNMP destination,
you must provide a valid Internet Protocol (IP) address. A maximum of six SNMP
destinations can be specified. For SNMP Version 2 servers, the community string is
required and the default value is public. You can use the Management Information
Base (MIB) file for SNMP to configure a network management program to receive
SNMP messages that are sent by the system. This file can be used with SNMP
messages from all versions of the software.
򐂰 Support for enhanced auditing features for syslog servers.
The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The system can send syslog messages that notify personnel
about an event. You can set up syslog event notifications with either the management GUI
or the CLI.

򐂰 Enhanced password security.
The user must change the default password to a different password for the first login or
system setup. If the password is ever reset to the default password, the user must
immediately change the password to a different password.
򐂰 The terms and definitions that relate to capacity were updated and improved.
򐂰 Support for the new SCM technology (such as Optane and 3DXP).
򐂰 Three-site replication with limited availability at general availability (GA) and subject to
RPQ initially.
Data is replicated from the primary site to two alternative sites. The remaining two sites
are aware of the difference between themselves, which ensures that if there is a disaster
at any one of the sites, the remaining two sites can establish a consistent_synchronized
Remote Copy (RC) relationship among themselves with minimal data transfer, that is,
within the expected recovery point objective (RPO).
򐂰 Secure Drive Erase is the ability to erase any customer data from a Non-Volatile Memory Express (NVMe) or serial-attached SCSI (SAS) solid-state drive (SSD) before it is removed from either a control or an expansion enclosure.
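
Several of these V8.3.1 capabilities are also exposed through the CLI. The following hedged sketch shows how an ownership group, a distributed array expansion, and SNMP and syslog notification servers might be configured. All object names, IDs, and IP addresses are hypothetical, and the exact options can vary by code level, so verify them against the command reference before use:

   # Create an ownership group and place an existing user group in it
   mkownershipgroup -name TenantA
   chusergrp -ownershipgroup TenantA 1

   # Expand distributed array mdisk1 to a total of 12 drives of drive
   # class 0; data is rebalanced automatically across the new layout
   expandarray -driveclass 0 -totaldrivecount 12 mdisk1

   # Add event notification destinations (SNMP and syslog)
   mksnmpserver -ip 192.168.10.50 -community public
   mksyslogserver -ip 192.168.10.51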

1.4 IBM Spectrum Virtualize family


IBM Spectrum Virtualize is a software-enabled storage virtualization engine that provides a
single point of control for storage resources within the data centers. IBM Spectrum Virtualize
is a core software engine of well-established and industry-proven IBM storage virtualization
solutions. These solutions include SAN Volume Controller and all versions of IBM Storwize
and IBM FlashSystem family of products.

1.4.1 IBM FlashSystem simplified terminology


The IBM Spectrum Storage™ family, running IBM Spectrum Virtualize Software, has been simplified to reflect the industry drive towards all-flash arrays.



Figure 1-2 shows the former names of both IBM Storwize and IBM FlashSystem systems,
versus the new IBM FlashSystem names. Existing earlier generation systems (Generation 1
products) still retain the brand name IBM Storwize.

Figure 1-2 IBM FlashSystem simplified terminology

Naming: With the introduction of the IBM Spectrum Storage family, the software that runs
on IBM SAN Volume Controller and IBM Storwize family products is called IBM Spectrum
Virtualize. The name of the underlying hardware platform remains intact.

The objectives of IBM Spectrum Virtualize are to manage storage resources in your IT
infrastructure and ensure that they are used to the advantage of your business. These
processes take place quickly, efficiently, and in real time while avoiding increases in
administrative costs.

IBM Spectrum Virtualize is the core software engine of the entire family of IBM FlashSystem
products, and the information in this book relates to the deployment considerations of the
following products:
򐂰 IBM FlashSystem 5100
򐂰 IBM FlashSystem 7200
򐂰 IBM FlashSystem 9100
򐂰 IBM FlashSystem 9200

1.5 IBM FlashSystem 5100 overview
The IBM FlashSystem 5100 system is a virtualized, SDS system that is designed to
consolidate workloads into a single storage system for ease of management, reduced costs,
highly scalable capacity, high performance, and availability. The IBM FlashSystem 5100
Models 4H4, 4F4, and UHB deliver improved latency and performance with the
implementation of NVMe technology.

The IBM FlashSystem 5100 Small Form Factor (SFF) Control Enclosure Models 4H4, 4F4,
and UHB feature the following components:
򐂰 Two node canisters, each with an 8-core processor and integrated hardware-assisted
compression acceleration
򐂰 64 GB cache (32 GB per canister) standard with the option of 192 GB - 576 GB (per
system)
򐂰 Eight 10-Gigabit Ethernet (GbE) ports standard for internet Small Computer Systems
Interface (iSCSI) connectivity
򐂰 16 Gb or 32 Gb FC connectivity options with NVMe over Fibre Channel (FC-NVMe)
support
򐂰 25 GbE connectivity options with iSCSI and iSCSI Extensions for Remote Direct Memory Access (RDMA) (iSER) (RoCE V2 and internet Wide Area RDMA Protocol (iWARP)) support
򐂰 Support for up to twenty-four 2.5-inch NVMe flash drives
򐂰 Support for up to four SCM class drives
򐂰 2U 19-inch rack-mounted enclosure

The IBM FlashSystem 5100 Model 4H4 attaches to Expansion Enclosure Models 12G, 24G,
and 92G, which support SAS flash drives and SAS HDD drives.

Note: Attachment and intermixing of existing Storwize V5100 Expansion Enclosure Models 12F, 24F, AFF, A9F, and 92F is supported by IBM FlashSystem 5100 Models 4H4 and UHB.

The IBM FlashSystem 5100 Model 4F4 attaches to Expansion Enclosure Models AFG and
A9G, which support SAS flash drives.

Note: The IBM FlashSystem 5100 Model 4F4 is also known as the IBM FlashSystem 5100F.

IBM 2078 Model UHB is the IBM FlashSystem 5100 hardware component that is used in the
Storage Utility Offering space. It is physically and functionally identical to the IBM
FlashSystem 5100 Models 4H4 and 4F4, except for target configurations and variable
capacity billing. The variable capacity billing uses IBM Storage Insights to monitor the system
usage, enabling allocated storage usage above a base subscription rate to be billed per
terabyte per month.

Allocated storage is identified as storage that is allocated to a specific host (and unusable to
other hosts), whether data is written or not. For thin provisioning, the data that is written is
considered used. For thick provisioning, total allocated volume space is considered used.



IBM FlashCore Module (FCM) drives integrate IBM MicroLatency® technology, advanced flash management, and reliability into a 2.5-inch SFF NVMe drive, with built-in, performance-neutral hardware compression and encryption.

The following 2.5-inch SFF NVMe FCM drives are supported in the IBM FlashSystem 5100
4H4, 4F4, and UHB Control Enclosures:
򐂰 4.8 TB NVMe FCM
򐂰 9.6 TB NVMe FCM
򐂰 19.2 TB NVMe FCM
򐂰 38.4 TB NVMe FCM

For more information about the supported drive types, see 1.13, “IBM FlashCore Module
drives, NVMe SSDs, and SCM drives” on page 67.

The following 2.5-inch SFF NVMe industry-standard drives are supported in the IBM
FlashSystem 5100 4H4, 4F4, and UHB Control Enclosures:
򐂰 800 GB 2.5-inch 3 Drive Write Per Day (DWPD) NVMe flash drive
򐂰 1.92 TB 2.5-inch NVMe flash drive
򐂰 3.84 TB 2.5-inch NVMe flash drive
򐂰 7.68 TB 2.5-inch NVMe flash drive
򐂰 15.36 TB 2.5-inch NVMe flash drive

All drives are dual-port and hot-swappable. Drives can be intermixed where applicable.
Expansion enclosures can be intermixed behind the SFF control enclosure.

1.5.1 IBM FlashSystem 5100 Model 4H4 supported hardware


Model 4H4 supports the following expansions:
򐂰 Model 12G/24G 2U 12-drive or 24-drive
򐂰 Model 92G 5U 92-drive

The following 2.5-inch flash drives are supported:
򐂰 800 GB, 1.6 TB, 1.92 TB, 3.84 TB, 7.68 TB, 15.36 TB, and 30.72 TB

The following 2.5-inch disk drives are supported:
򐂰 600 GB and 900 GB 15,000 RPM SAS drives
򐂰 600 GB, 900 GB, 1.2 TB, 1.8 TB, and 2.4 TB 10,000 RPM SAS disks
򐂰 2 TB 7,200 RPM NL-SAS disks

The following 3.5-inch disk drives are supported:

4 TB, 6 TB, 8 TB, 10 TB, 12 TB, and 14 TB 7,200 RPM NL-SAS disks

1.5.2 IBM FlashSystem 5100 Model 4F4 supported hardware


Model 4F4 supports the following expansions:
򐂰 Model AFG 2U 24-drive
򐂰 Model A9G 5U 92-drive

The following 2.5-inch flash drives are supported:

800 GB, 1.6 TB, 1.92 TB, 3.84 TB, 7.68 TB, 15.36 TB, and 30.72 TB

Figure 1-3 shows the front view of the IBM FlashSystem 5100 Control Enclosure.

Figure 1-3 Front view of an IBM FlashSystem 5100 Control Enclosure with 24 SSD drives

Figure 1-4 shows the rear view of a IBM FlashSystem 5100 Control Enclosure.

Figure 1-4 Rear view of an IBM FlashSystem 5100 Control Enclosure

1.5.3 Supported fabric protocols


Table 1-1 shows the supported fabric protocols.

Table 1-1 Supported fabric protocols

Platform   Fibre Channel   iSCSI    iSER     FC-NVMeoF
5100       Yes             Yes(a)   Yes(b)   Yes

a. 10 GbE and 25 GbE (10 GbE supports iSCSI only)
b. 25 GbE only

1.5.4 Host interface cards


There are two PCIe adapter slots. Table 1-2 shows the supported card combinations (nodes
in the same I/O group must match).

Table 1-2 Supported card combinations

Number of cards   Ports   Protocol                       Slot positions   Note
0 - 1             4       16 Gb FC                       2
0 - 1             4       32 Gb FC                       2
0 - 1             2       25 Gb Ethernet (iWARP)         2
0 - 1             2       25 Gb Ethernet (RDMA over      2
                          Converged Ethernet (RoCE))
0 - 1             2       12 Gb SAS expansion            1                Expansion only, no SAS host
                                                                          attachment; four-port card,
                                                                          but only two ports are active

1.5.5 Onboard ports

Table 1-3 shows the onboard ports.

Table 1-3 Onboard ports

Onboard Ethernet port   Speed    Function
1                       10 GbE   Management IP, service IP, and host I/O (iSCSI only)
2                       10 GbE   Secondary management IP and host I/O (iSCSI only)
3                       10 GbE   Host I/O (iSCSI only)
4                       10 GbE   Host I/O (iSCSI only)
T                       10 GbE   Technician port: DHCP/domain name server (DNS) for direct-attach service management

Figure 1-5 shows all of the connectors of an IBM FlashSystem 5100 control bottom canister.

Figure 1-5 Connectors on an IBM FlashSystem 5100 control bottom canister
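
Because the onboard ports carry host I/O over iSCSI, a host object for an iSCSI initiator can be defined against them. A minimal sketch follows, assuming a hypothetical initiator IQN and host name, and a previously created volume named vol0:

   # Define an iSCSI host object by its initiator IQN
   mkhost -name esx1 -iscsiname iqn.1998-01.com.vmware:esx1

   # Optionally set a CHAP secret for the host
   chhost -chapsecret mysecret esx1

   # Map a volume to the new host
   mkvdiskhostmap -host esx1 vol0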

This section provides an overview of the data management and protection features that are
available on the IBM FlashSystem 5100 systems.

The following aspects are covered:
򐂰 IBM Spectrum Virtualize Software
򐂰 Feature set comparison
򐂰 Licensing
򐂰 Storage migration and management
򐂰 Features for storage efficiency and data reduction
򐂰 Features for enhanced data security
򐂰 Copy services and HyperSwap
򐂰 Features for application integration
򐂰 Features for manageability

1.5.6 IBM Spectrum Virtualize Software


An IBM FlashSystem 5100 Control Enclosure consists of two node canisters that each run
IBM Spectrum Virtualize Software, which is part of the IBM Spectrum Storage family.

IBM Spectrum Virtualize combines various IBM technologies, including deduplication, compression, thin provisioning, distributed redundant array of independent disks (DRAID) arrays, SCSI UNMAP, HyperSwap (an HA solution), IBM Easy Tier (automatic and dynamic data tiering), encryption of internal and external virtualized storage, FlashCopy (snapshots), and remote data replication. These technologies enable the Storwize family systems to provide a rich set of functional capabilities and deliver extraordinary levels of storage efficiency.

The IBM FlashSystem 5100 system requires IBM Spectrum Virtualize Software V8.2.1 or
later for operation.
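
As an example of one of these technologies, the DRAID capability is driven from the CLI. The following hedged sketch creates a DRAID 6 array from internal drives and places it into an existing pool. The pool name, drive class, and drive counts are hypothetical, and the exact parameter requirements can differ by code level:

   # Create a DRAID 6 array from 12 drives of drive class 0, with a stripe
   # width of 10 and one distributed rebuild area, placed in Pool0
   mkdistributedarray -level raid6 -driveclass 0 -drivecount 12 -stripewidth 10 -rebuildareas 1 Pool0

   # Verify the new array and its attributes
   lsarray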

1.5.7 Feature set comparison


Even though all of the Storwize family systems run the same IBM Spectrum Virtualize software, the feature set that is available with each of the models differs. Table 1-4 shows the feature set that is provided by the IBM FlashSystem 5100 systems. Each of the features is described in more detail in further sections of this chapter.

Table 1-4 IBM FlashSystem 5100 systems features chart

Feature                                    IBM FlashSystem 5100 system
External virtualization                    Data migration and virtualization
Traditional RAID                           No RAID support on FCM drives; RAID 10 on industry-standard NVMe drives
DRAID                                      DRAID 5 and DRAID 6
Easy Tier                                  Yes, with an enclosure-based license
DRP                                        Yes
Thin provisioning                          Yes
Data compression                           Yes, and requires an enclosure-based license
Data deduplication                         Yes
Hardware encryption                        Yes, with a key-based license
Software encryption                        Yes, for use on virtualized storage, with a key-based license
Volume mirroring                           Yes
FlashCopy (snapshot)                       Yes, with an enclosure-based license
Remote mirroring                           Yes, with an enclosure-based license
HyperSwap                                  Yes
VMware vSphere Virtual Volumes (VVOLs)     Yes
IBM Storage Insights                       Yes

1.5.8 Licensing
All IBM FlashSystem 5100 functional capabilities are provided through IBM Spectrum
Virtualize Software.

The base license that is provided with the system includes its basic functions. However, there
are also extra licenses that can be purchased to expand the capabilities of the system.
Administrators are responsible for purchasing extra licenses and configuring the systems
within the license agreement, which includes configuring the settings of each licensed
function.

IBM FlashSystem 5100 licenses (enclosure-based)


IBM FlashSystem 5100 systems employ a license scheme that uses certain licensed
functions that are based on the number of enclosures that are indicated in the license. The
system supports the following licensed functions:
򐂰 External virtualization
The system does not require a license for its own control and expansion enclosures, but a
license is required for each enclosure of any external systems that are being virtualized.
Data can be migrated from existing storage systems to a system that uses the external
virtualization function within 45 days of purchase of the system without purchase of a
license.
After 45 days, any ongoing use of the external virtualization function requires a license for
each enclosure in each external system. The system does not require an external
virtualization license for external enclosures that are being used only to provide MDisks for
a quorum disk and are not providing any capacity for volumes.
򐂰 Remote mirroring
The remote mirroring (remote copy) function sets up a relationship between two volumes
so that updates that are made by an application to one volume are mirrored on the other
volume. This function is licensed per enclosure. You can use the remote mirroring
functions on the total number of enclosures that are licensed.
The total number of enclosures must include the enclosures on external storage systems
that are licensed for virtualization and the number of control and expansion enclosures
that are part of your local system. The remote mirroring option must be acquired for both
the primary (local) and secondary (remote) systems. If the IBM FlashSystem 5100 system
is mirrored to a system that is not an IBM FlashSystem 5100 system, the other system
must have the appropriate and applicable license for remote mirroring.

16 Implementing IBM FlashSystem 9200, 9100, 7200, and 5100 Systems with IBM Spectrum Virtualize V8.3.1
򐂰 Compression
The compression function requires a separately orderable license that is set on a per
enclosure basis. One license is required for each control or expansion enclosure and each
enclosure in any external storage systems that use virtualization. With the compression
function, data is compressed as it is written to disk, saving extra capacity for the system.

The FlashCopy function also requires a license to use, but it does not require any input on the
system. For auditing purposes, retain the license agreement for proof of compliance.

In addition to these enclosure-based licensed functions, the system also supports encryption
through a key-based license.

If you use a trial license, the system warns you when the trial is about to expire at regular
intervals. If you do not purchase and activate the license on the system before the trial license
expires, all configurations that use the trial licenses are suspended.
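
From the CLI, the current license settings can be displayed with lslicense and adjusted with chlicense. The following lines are a minimal sketch, assuming the enclosure-based chlicense parameters that are available on this platform; the enclosure counts are illustrative only:

  lslicense                      # display the current licensed function settings
  chlicense -virtualization 2    # license external virtualization for two external enclosures
  chlicense -remote 3            # license remote mirroring for three enclosures
  chlicense -compression 1      # license compression for one enclosure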

Encryption license (key-based)


Encryption is enabled on IBM FlashSystem 5100 systems by obtaining the Encryption
Enablement feature. This feature enables encryption on the entire Storwize family system and
externally virtualized storage subsystems.

The encryption feature uses a key-based license that is activated by an authorization code.
The authorization code is sent with the IBM FlashSystem 5100 Licensed Function
Authorization documents that you receive after purchasing the license.

The Encryption USB Flash Drives (Four Pack) feature or IBM Security™ Key Lifecycle Manager (IBM SKLM) is required for encryption key management.

1.5.9 Storage migration and management


The IBM FlashSystem 5100 system offers various technologies for managing internal storage:
NVMe FCM drives and NVMe drives on IBM FlashSystem 5100, plus high-performance SAS
disk drives, high-capacity NL-SAS disk drives, and flash (solid-state) drives on expansion
enclosures.

The external virtualization feature of the Storwize family systems makes the migration of data
from one storage device to another easier. You can use it to manage other IBM or third-party
storage arrays in the same manner as the capacity on internal drives or modules.

1.5.10 Data migration by using the external virtualization feature


The IBM FlashSystem 5100 family models can migrate data from external storage controllers,
including migrating from any other IBM or third-party storage systems. The IBM FlashSystem
5100 system uses the functions that are provided by its external virtualization capability to
perform the migration. This capability places external LUs under the control of a Storwize
family machine. After that, hosts continue to access them through the IBM FlashSystem
5100, which acts as a proxy.



The migration process typically consists of the following steps:
1. Input/output (I/O) to the LUs that exist on the external storage system must be stopped,
and changes must be made to the mapping of the storage system so that the original LUs
are presented directly to the Storwize family machine and not to the hosts. The IBM
FlashSystem 5100 system discovers the external LUs and recognizes them as
unmanaged external storage back-end devices (MDisks).
2. The unmanaged MDisks are then imported as IBM FlashSystem 5100 image mode volumes and placed in a migration storage pool. This storage pool is now a logical container for the externally attached LUs. Each volume has a one-to-one mapping with an external LU.
From a data perspective, the image mode volume represents the SAN-attached LUs
exactly as they were before the import operation. The image mode volumes are on the
same physical drives of the storage system, and the data remains unchanged.
3. Configure your hosts for the IBM FlashSystem 5100 attachment, configure host objects on
IBM FlashSystem 5100 system, and map image mode volumes to them. After the volumes
are mapped, the hosts discover their volumes and are ready to continue working with
them, so I/O can be resumed.
4. Migrate image mode volumes to the internal storage of the IBM FlashSystem 5100 system
by using the volume mirroring feature. Mirrored copies are created online, so a host can
still access and use the volumes during the mirror synchronization process.
5. After the mirror operations are complete, the image mode volumes are removed (deleted),
and the external storage system can be disconnected and decommissioned or reused
elsewhere.

The IBM FlashSystem 5100 system GUI provides a storage migration wizard, which simplifies
the migration task. The wizard features intuitive steps that guide users through the entire
process.
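
The same flow can also be driven from the CLI. The following sketch outlines the key commands behind steps 1, 2, 4, and 5; object names such as mdisk8, MigrationPool, Pool0, and legacy_vol0 are placeholders for this example:

  detectmdisk                           # discover the external LUs as unmanaged MDisks (step 1)
  lsmdisk -filtervalue mode=unmanaged   # identify the discovered MDisks
  mkvdisk -mdiskgrp MigrationPool -iogrp 0 -vtype image -mdisk mdisk8 -name legacy_vol0
                                        # import the LU as an image mode volume (step 2)
  addvdiskcopy -mdiskgrp Pool0 -autodelete legacy_vol0
                                        # mirror to internal storage; the image mode copy is
                                        # deleted after synchronization (steps 4 and 5)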

1.5.11 External storage virtualization


You can use the IBM FlashSystem 5100 system to manage the capacity of other disk systems
with external storage virtualization. When the IBM FlashSystem 5100 system virtualizes a
storage system, its capacity is managed similarly to internal disk drives or flash modules.
Capacity in external storage systems inherits all of the rich functions and ease of use of the
IBM FlashSystem 5100 system.

You can use the IBM FlashSystem 5100 system to preserve your existing investments in
storage, centralize management, and make storage migrations easier with storage
virtualization and Easy Tier. Virtualization helps insulate applications from changes that are
made to the physical storage infrastructure.

To virtualize external storage with the IBM FlashSystem 5100 system, map its LUs to the
system and add them to a storage pool as MDisks. After that, you can create and manage
volumes from capacity that is provisioned from external systems.

To verify whether your storage can be virtualized by the IBM FlashSystem 5100 system, see
the IBM System Storage Interoperation Center (SSIC).
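
In CLI terms, virtualizing external capacity is a short sequence, sketched here with placeholder MDisk and pool names:

  detectmdisk                                    # discover the LUs that are mapped from the external system
  mkmdiskgrp -name ExternalPool -ext 1024        # create a storage pool with a 1 GiB extent size
  addmdisk -mdisk mdisk10:mdisk11 ExternalPool   # add the external MDisks to the pool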

1.5.12 Managing internal storage
The internal storage of the IBM FlashSystem 5100 system consists of NVMe-attached drives,
FCM drives, or SAS and flash SSD type drives for the expansion enclosures. For more
information about supported drive types, see 1.5, “IBM FlashSystem 5100 overview” on
page 11.

To use the internal IBM FlashSystem 5100 drives in storage pools, they must be joined as
RAID arrays to form array-type MDisks.

RAID fulfills two key design goals:


򐂰 Increased data reliability
򐂰 Increased input/output (I/O) performance

The IBM FlashSystem 5000 family systems support two RAID types: traditional RAID and DRAID. However, on FCM drives the IBM FlashSystem 5100 supports only DRAID.

In a traditional RAID approach, data is spread among drives in an array. However, the spare
space is constituted by spare drives, which sit outside of the array. Spare drives are idling,
and do not share any I/O load that comes to an array. When one of the drives within the array
fails, all data is read from the mirrored copy (for RAID 10) or is calculated from the remaining
data stripes and parity (for RAID 5 or RAID 6) and written to a single spare drive.

In DRAID, spare capacity is used instead of the idle spare drives from a traditional RAID. The
spare capacity is spread across the disk drives. Because no drives are idling, all drives
contribute to array performance. If there is a drive failure, the rebuild load is distributed across
multiple drives, which addresses two main disadvantages of a traditional RAID approach:
򐂰 It reduces rebuild times by eliminating the bottleneck of one drive.
򐂰 It increases the array performance by increasing the number of drives sharing the
workload.

FCM drives that are installed in the IBM FlashSystem 5100 system can be aggregated into
both DRAID 6 and DRAID 5 arrays, but DRAID 6 is recommended. FCM drives in the same
RAID array must have the same capacity. RAID 0, 1, traditional RAID 5 and 6, and 10 are not
supported on NVMe FCM drives.

Industry-standard NVMe drives in an IBM FlashSystem 5100 system can be aggregated into
DRAID 6 and DRAID 5 arrays, and also can form RAID 1 and RAID 10 arrays. Traditional
RAID 5 and 6 are not supported. Industry-standard NVMe drives in the same RAID array
must be of the same capacity.

RAID arrays of some levels can be created only with a system CLI, not the GUI.
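
For example, a DRAID 6 array is created and added to a pool with the mkdistributedarray command, and a traditional RAID 10 array of industry-standard NVMe drives with the mkarray command. The following sketch uses placeholder drive classes, drive IDs, and pool names:

  mkdistributedarray -level raid6 -driveclass 0 -drivecount 8 Pool0   # DRAID 6 array from eight drives of class 0
  mkarray -level raid10 -drive 0:1:2:3 Pool1                          # traditional RAID 10 array from four NVMe drives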

Table 1-5 summarizes the supported drives, array types, and RAID levels.

Table 1-5   Summary of supported drives, array types, and RAID levels

                                   Non-distributed arrays (traditional RAID)   DRAID arrays
Supported drives                   RAID 0  RAID 1  RAID 5  RAID 6  RAID 10     RAID 5  RAID 6
2.5-inch and 3.5-inch SAS disk
drives and SSDs                    Yes     Yes     No      No      Yes         Yes     Yes
IBM FlashSystem 5100
industry-standard NVMe drives      Yes     Yes     No      No      Yes         Yes     Yes
FCM drives                         No      No      No      No      No          Yes     Yes



Note: IBM advises using DRAID 6 for all drive types.

1.6 Features for storage efficiency and data reduction


IBM Spectrum Virtualize Software running in the IBM FlashSystem 5100 system offers
several functions for storage optimization and efficiency.

Easy Tier
Many applications exhibit a significant skew in the distribution of I/O workload. A small fraction
of the storage is responsible for a disproportionately large fraction of the total I/O workload of
an environment.

Easy Tier acts to identify this skew and automatically place data to take advantage of it. By
moving the hottest data onto the fastest tier of storage, the workload on the remainder of the
storage is significantly reduced. By servicing most of the application workload from the fastest
storage, Easy Tier acts to accelerate application performance.

Easy Tier is a performance optimization function that automatically migrates (or moves) extents that belong to a volume among different storage tiers based on their I/O load. Movement of the extents is online and unnoticeable from the host perspective.

As a result of extent movement, the volume no longer has all its data in one tier, but rather in
two or three tiers. Each tier provides optimal performance for the extent, as shown in
Figure 1-6.

Figure 1-6 Easy Tier concept

Easy Tier monitors the I/O activity and latency of the extents on all Easy Tier enabled storage
pools to create heat maps. Based on them, Easy Tier creates an extent migration plan and
promotes (moves) high activity or hot extents to a higher disk tier within the same storage
pool. It also demotes extents whose activity dropped off, or cooled, by moving them from a
higher disk tier MDisk back to a lower tier MDisk.

Storage pools that contain only one tier of storage can also benefit from Easy Tier if they have
multiple disk arrays (or MDisks). Easy Tier has a balancing mode: it moves extents from busy
disk arrays to less busy arrays of the same tier, balancing I/O load.

All arrays and MDisks belong to one of the tiers. They are classified as tier 0 flash, tier 1 flash,
enterprise, or NL tier. The system supports both tier 0 and tier 1 flash drives. Tier 0 flash
drives are higher-cost flash drives that provide high performance for read and write
operations. Tier 1 flash drives are lower-cost flash drives that have large capacity but lower
performance and write endurance characteristics.

Easy Tier is supported on IBM FlashSystem 5100 systems, but requires the appropriate
license to be installed and configured.
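
Easy Tier is controlled per storage pool. As a brief sketch (Pool0 is a placeholder name):

  chmdiskgrp -easytier auto Pool0   # let the system manage tiering automatically
  lsmdiskgrp Pool0                  # the easy_tier and easy_tier_status fields show the current state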

Data reduction and UNMAP


The UNMAP feature is a set of SCSI primitives that enables hosts to indicate to a SCSI target
(storage system) that space that is allocated to a range of blocks on a target storage volume
is no longer required. This command enables the storage controller to take measures and
optimize the system so that the space can be reused for other purposes.

The most common use case, for example, is a host application, such as VMware, freeing
storage in a file system. Then, the storage controller can perform functions to optimize the
space, such as reorganizing the data on the volume so that space is better used.

When a host allocates storage, the data is placed in a volume. To free the allocated space
back to the storage pools, the SCSI UNMAP feature is used. UNMAP enables host OSs to
deprovision storage on the storage controller so that the resources can automatically be freed
in the storage pools and used for other purposes.

A DRP increases infrastructure capacity usage by using new efficiency functions and
reducing storage costs. By using the end-to-end SCSI UNMAP function, a DRP can
automatically de-allocate and reclaim the capacity of thin-provisioned volumes that contain
deleted data so that this reclaimed capacity can be reused by other volumes.

At its core, a DRP uses a Log Structured Array (LSA) to allocate capacity. An LSA enables a
tree-like directory to be used to define the physical placement of data blocks independent of
size and logical location. Each logical block device has a range of LBAs, starting from 0 and
ending with the block address that fills the capacity.

When data is written, the LSA allocates it sequentially and maintains a directory that matches each LBA with a physical address within the array. Therefore, the volume that you create from the pool to present to a host application consists of a directory that stores the allocation of blocks within the capacity of the pool.

In DRPs, the maintenance of the metadata results in I/O amplification. I/O amplification
occurs when a single host-generated read or write I/O results in more than one back-end
storage I/O request because of advanced functions. A read request from the host results in
two I/O requests: a directory lookup and a data read. A write request from the host results in
three I/O requests: a directory lookup, a directory update, and a data write. This aspect must
be considered when sizing and planning your data-reducing solution.

Standard pools, which make up a classic solution that is also supported by the IBM
FlashSystem 5100 systems, do not use LSA. A standard pool works as a container that
receives its capacity from disk arrays (MDisks), splits it into extents of the same fixed size,
and allocates extents to volumes.



Standard pools do not cause I/O amplification and require less processing resource usage
compared to DRPs. In exchange, DRPs provide more flexibility and storage efficiency.
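
Whether a pool is a DRP or a standard pool is decided when the pool is created. A minimal CLI sketch with illustrative names and extent sizes:

  mkmdiskgrp -name DRPool0 -ext 1024 -datareduction yes   # create a data reduction pool
  mkmdiskgrp -name StdPool0 -ext 1024                     # create a standard pool (the default)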

Table 1-6 provides an overview of volume capacity saving types that are available with
standard pools and DRPs.

Table 1-6   Volume types that are available in pools

Volume type                                    Standard pool  DRP
Fully allocated                                Yes            Yes
Thin-provisioned                               Yes            Yes
Thin-provisioned compressed                    No             Yes
Thin-provisioned deduplicated                  No             Yes
Thin-provisioned compressed and deduplicated   No             Yes

This book provides only an overview of DRP aspects. For more information, see Introduction
and Implementation of Data Reduction Pools and Deduplication, SG24-8430.

Fully allocated and thin-provisioned volumes


Volumes can be configured as thin-provisioned or fully allocated. Both can be configured in
standard pools and DRPs.

In Storwize family systems, each volume has virtual capacity and real capacity parameters. Virtual capacity is the volume storage capacity that is available to a host, and it is used by the host to create a file system. Real capacity is the storage capacity that is allocated to a volume from a pool, and it shows the amount of space that is used on the physical storage.

Fully allocated volumes are created with the same amount of real capacity and virtual
capacity. This type uses no storage efficiency features.

When a fully allocated volume is created on a DRP, it bypasses the LSA structure and works
in the same manner as in a standard pool, so it has no processing impact and provides no
data reduction options at the pool level.

When using fully allocated volumes on the IBM FlashSystem 5100 system with FCM drives,
whether a DRP or standard pool is used, capacity savings are achieved by compressing data
with hardware compression that runs on FCM drives. Hardware compression on FCM drives
is always enabled. This configuration provides maximum performance in combination with
outstanding storage efficiency.

A thin-provisioned volume presents a different capacity to mapped hosts than the capacity
that the volume uses in the storage pool. Therefore, real and virtual capacities might not be
equal. The virtual capacity of a thin-provisioned volume is typically significantly larger than its
real capacity. As more information is written by the host to the volume, more of the real
capacity is used. The system identifies read operations to unwritten parts of the virtual
capacity, and returns zeros to the server without using any real capacity.

In a shared storage environment, thin provisioning is a method for optimizing the use of
available storage. Thin provisioning relies on the allocation of blocks of data on demand,
versus the traditional method of allocating all of the blocks up front. This method eliminates
almost all white space, which helps avoid the poor usage rates that occur in the traditional
storage allocation method where large pools of storage capacity are allocated to individual
servers but remain unused (not written to).

If a thin-provisioned volume is created in a DRP, the system monitors it for reclaimable capacity from host unmap operations. This capacity can be reclaimed and redistributed into the pool. The process of freeing space from the hosts is called UNMAP. A host can issue SCSI UNMAP commands when the user deletes files on a file system, which frees all of the capacity that was allocated within the unmapped range. Similarly, deleting a volume at the DRP level frees all of its capacity back to the pool.

A thin-provisioned volume in a standard pool cannot return unused capacity back to the pool
with SCSI UNMAP.

A thin-provisioned volume can be converted non-disruptively to a fully allocated volume, or vice versa, by using the volume mirroring function. For example, you can add a thin-provisioned copy to a fully allocated primary volume, and then remove the fully allocated copy from the volume after they are synchronized.
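
The following sketch shows how such a conversion might look from the CLI, assuming a fully allocated volume vol0 and a target pool Pool0; -rsize sets the initial real capacity of the thin-provisioned copy:

  addvdiskcopy -mdiskgrp Pool0 -rsize 2% -autoexpand -autodelete vol0
                                   # add a thin-provisioned copy; the fully allocated copy is
                                   # removed automatically after synchronization
  lsvdisksyncprogress vol0         # monitor the synchronization progress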

Note: Using non-compressed thin-provisioned volumes on storage pools that run on FCM drives is not recommended. The system GUI prevents the creation of such configurations.

1.6.1 Compression and deduplication


When using DRPs on the IBM FlashSystem 5100 system, host data can be compressed or
compressed and deduplicated before it is written to the disk drives.

The Storwize family DRP compression is based on the Lempel-Ziv lossless data compression
algorithm that operates by using a real-time method. When a host sends a write request, the
request is acknowledged by the write cache of the system, and then staged to the DRP.

As part of its staging, the write request passes through the compression engine and is stored
in a compressed format. Therefore, writes are acknowledged immediately after they are
received by the write cache with compression occurring as part of the staging to internal or
external physical storage. This process occurs transparently to host systems making them
unaware of the compression.

DRP compression is supported on the IBM FlashSystem 5100 system. This system’s node
canisters have a compression accelerator that is installed that increases the throughput of I/O
transfers between nodes and compressed volumes.

The IBM Comprestimator tool is available to check whether your data is compressible. It
estimates the space savings that are achieved when using compressed volumes. This utility
provides a quick and easy view of showing the benefits of using compression. IBM
Comprestimator can run from the Storwize family system GUI or CLI, and it checks data that
is already stored on the system. It is also available as a stand-alone, host-based utility that
can analyze data on IBM or third-party storage devices. It can be found at Comprestimator
Utility Version [Link].
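
From the system CLI, the integrated IBM Comprestimator might be driven as in the following sketch (the volume ID 0 is a placeholder):

  analyzevdisk 0         # start the analysis of a single volume
  analyzevdiskbysystem   # or queue the analysis of every volume on the system
  lsvdiskanalysis 0      # show the estimated thin-provisioning and compression savings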



Deduplication can be configured with thin-provisioned and compressed volumes in DRPs for
added capacity savings. The deduplication process identifies unique chunks of data, or byte
patterns, and stores a signature of the chunk for reference when writing new data chunks.

If the new chunk’s signature matches an existing signature, the new chunk is replaced with a
small reference that points to the stored chunk. The matches are detected when the data is
written. The same byte pattern might occur many times, resulting in the amount of data that
must be stored being greatly reduced.

Deduplication is supported on volumes that are provisioned from a DRP on IBM FlashSystem
5100 systems.

To help with the profiling and analysis of existing workloads that must be migrated to an
IBM FlashSystem 5100 system, IBM provides the Data Reduction Estimation Tool (DRET).
DRET is a highly accurate, command-line, and host-based utility for estimating the data
reduction savings on block storage devices. The tool scans target workloads on various
storage arrays (from IBM or another company), merges all scan results, and provides a data
reduction estimate.

Compression and deduplication are not mutually exclusive: one, both, or neither feature can be enabled. If a volume is deduplicated and compressed, data is deduplicated first, and then compressed. Therefore, deduplication references are created on the compressed data that is stored on the physical domain.

1.6.2 Features for enhanced data security


To protect data against the potential exposure of sensitive user data and user metadata that is
stored on discarded, lost, or stolen storage devices, IBM FlashSystem 5100 systems support
encryption of data-at-rest.

Encryption is performed by the IBM FlashSystem 5100 Control Enclosure for data that is stored within the entire system: the IBM FlashSystem 5100 Control Enclosure itself, all attached expansion enclosures, and any storage subsystems that are externally virtualized by the IBM FlashSystem 5100.

Encryption is the process of encoding data so that only authorized parties can read it. Data
encryption is protected by the Advanced Encryption Standard (AES) algorithm that uses a
256-bit symmetric encryption key in XTS mode, as defined in the IEEE 1619-2007 standard
and NIST Special Publication 800-38E as XTS-AES-256.

There are two types of encryption on devices running IBM Spectrum Virtualize: hardware
encryption and software encryption. Which method is used for encryption is chosen
automatically by the system based on the placement of the data:
򐂰 Hardware encryption: Data is encrypted by using SAS hardware. It is used only for internal
storage (drives).
򐂰 Software encryption: Data is encrypted by using the nodes’ CPU (the encryption code
uses the AES-NI CPU instruction set). It is used only for external storage that is virtualized
by the IBM FlashSystem 5100 system.

Both methods of encryption use the same encryption algorithm, key management
infrastructure, and license.

Note: Only data-at-rest is encrypted. Host to storage communication and data that is sent
over links that are used for remote mirroring are not encrypted.

The IBM FlashSystem 5100 system also supports self-encrypting drives, where data
encryption is completed in the drive itself.

Before encryption can be enabled, ensure that a license was purchased and activated.

The system supports two methods of configuring encryption:


򐂰 You can use a centralized key server that simplifies creating and managing encryption
keys on the system. This method of encryption key management is preferred for security
and simplification of key management.
A key server is a centralized system that generates, stores, and sends encryption keys to
the system. Some key server providers support replication of keys among multiple key
servers. If multiple key servers are supported, you can specify up to four key servers that
connect to the system over both a public network or a separate private network. The
system supports IBM SKLM or Gemalto SafeNet KeySecure key servers to handle key
management.
򐂰 In addition, the system supports storing encryption keys on USB flash drives. USB flash
drive-based encryption requires physical access to the systems and is effective in
environments with a minimal number of systems. For organizations that require strict
security policies regarding USB flash drives, the system supports disabling a canister’s
USB ports to prevent unauthorized transfer of system data to portable media devices. If
you have such security requirements, use key servers to manage encryption keys.
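
As an illustration, USB-based encryption enablement typically follows the sequence that is sketched below after the license is activated; the license key and the key server IP address are placeholders:

  activatefeature -licensekey xxxx-xxxx-xxxx-xxxx   # activate the encryption license
  chencryption -usb enable                          # enable the USB flash drive key provider
  chencryption -usb newkey -key prepare             # generate new keys and copy them to the inserted USB flash drives
  chencryption -usb newkey -key commit              # commit the new encryption key
  mkkeyserver -ip 10.0.0.50                         # alternatively, register a key server such as IBM SKLM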

1.7 Copy services and HyperSwap


The system provides copy services functions that can be used for migration, and to improve
availability and support DR.

Volume mirroring
By using volume mirroring, a volume can have two physical copies in one IBM FlashSystem
5100 system. Each volume copy can belong to a different pool and use a different set of
capacity saving features.

When a host writes to a mirrored volume, the system writes the data to both copies. When a
host reads a mirrored volume, the system picks one of the copies to read. If one of the
mirrored volume copies is temporarily unavailable, the volume remains accessible to servers.
The system remembers which areas of the volume are written, and resynchronizes these
areas when both copies are available.

You can create a volume with one or two copies, and you can convert a non-mirrored volume
into a mirrored volume by adding a copy. When a copy is added in this way, the system
synchronizes the new copy so that it is the same as the existing volume. Servers can access
the volume during this synchronization process.

Volume mirroring can be used to migrate data to or from a Storwize family system. For
example, you can start with a non-mirrored image mode volume in the migration pool, and
then add a copy to that volume in the destination pool on internal storage. After the volume is
synchronized, you can delete the original copy that is in the source pool. During the
synchronization process, the volume remains available.

Volume mirroring is also used to convert fully allocated volumes to use data reduction
technologies, such as thin-provisioning, compression, or deduplication, or to migrate volumes
between storage pools. Volume mirroring is available on the IBM FlashSystem 5100 system
without extra licenses.



FlashCopy
The FlashCopy or snapshot function creates a PiT copy of data that is stored on a source
volume to a target volume. FlashCopy is sometimes described as an instance of a time-zero
(T0) copy. Although the copy operation takes some time to complete, the resulting data on the
target volume is presented so that the copy appears to have occurred immediately, and all
data is available immediately. Advanced functions of FlashCopy allow operations to occur on
multiple source and target volumes.

Management operations are coordinated to provide a common, single point in time for
copying target volumes from their respective source volumes to create a consistent copy of
data that spans multiple volumes.

The function also supports multiple target volumes to be copied from each source volume,
which can be used to create images from different PiTs for each source volume.

FlashCopy is used to create consistent backups of dynamic data and test applications, and to
create copies for auditing purposes and for data mining. It can be used to capture the data at
a particular time to create consistent backups of dynamic data. The resulting image of the
data can be backed up, for example, to a tape device. When the copied data is on tape, the
data on the FlashCopy target disks becomes redundant and can be discarded.

Another possible FlashCopy application is creating test environments. FlashCopy can be
used to test an application with real business data before the existing production version of
the application is updated or replaced. With FlashCopy, a fully functional and space-efficient
clone of a volume containing real data can be created. It enables read and write access for
the test environment while keeping the real production environment data both safe and
untouched. After testing is complete, the clone volume can be discarded or retained for future
use.

FlashCopy can perform a restore from any existing FlashCopy mapping. Therefore, you can restore (or copy) from the target to the source of your regular FlashCopy relationships. Restoring data in this way effectively reverses the direction of the FlashCopy mapping. This approach can be used for various applications, such as recovering a production database application after an errant batch process caused extensive damage.

The IBM FlashSystem 5100 system requires a license to use the FlashCopy function. The
license is not policed by the system and does not require any input. For auditing purposes,
retain the license agreement for proof of compliance.
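
A minimal CLI sketch of creating and starting a snapshot-style mapping follows; the volume names are illustrative, and the target volume must already exist with the same size as the source:

  mkfcmap -source vol0 -target vol0_snap -copyrate 0   # copyrate 0 creates a snapshot-style mapping
  startfcmap -prep fcmap0                              # flush the cache and start the point-in-time copy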

Remote mirroring
You can use remote mirroring, or RC functions, to set up a relationship between two volumes,
where updates made to one volume are mirrored on the other volume. The volumes can be
on two different systems (intersystem). In addition, the Storwize family systems support
volumes that are on the same system (intrasystem).

Although data is only written to a single volume, the system maintains two copies of the data.
If the copies are separated by a significant distance, the RC can be used as a backup for DR.

For an RC relationship, one volume is designated as the primary and the other volume is
designated as the secondary. Host applications write data to the primary volume, and
updates to the primary volume are copied to the secondary volume. Normally, host
applications do not run I/O operations to the secondary volume.

The following types of remote mirroring are available:
򐂰 Metro Mirror (MM)
Provides a consistent copy of a source volume on a target volume. Data is written to the
target volume synchronously after it is written to the source volume so that the copy is
continuously updated.
With synchronous copies, host applications write to the primary volume but do not receive
a confirmation that the write operation completed until the data is written to the secondary
volume, which ensures that both volumes have identical data when the copy operation
completes. After the initial copy operation completes, the MM function maintains a fully
synchronized copy of the source data at the target site always. The MM function supports
copy operations between volumes that are separated by distances up to 300 km.
For DR purposes, MM provides the simplest way to maintain an identical copy on both the
primary and secondary volumes. However, as with all synchronous copies over remote
distances, there can be a performance impact to host applications. This performance
impact is related to the distance between primary and secondary volumes and, depending
on application requirements, its use might be limited based on the distance between sites.
򐂰 Global Mirror (GM)
Provides a consistent copy of a source volume on a target volume. The data is written to
the target volume asynchronously and the copy is continuously updated. When a host
writes to the primary volume, a confirmation of I/O completion is received before the write
operation completes for the copy on the secondary volume. Due to this situation, the copy
might not contain the most recent updates when a DR operation is completed.
If a failover operation is initiated, the application must recover and apply any updates that
were not committed to the secondary volume. If I/O operations on the primary volume are
paused for a small length of time, the secondary volume can become an exact match of
the primary volume. This function is comparable to a continuous backup process in which
the last few updates are always missing. When you use GM for DR, you must consider
how you want to handle these missing updates.
The secondary volume is generally less than 1 second behind the primary volume, which
minimizes the amount of data that must be recovered if a failover occurs. However, a
high-bandwidth link must be provisioned between the two sites.
򐂰 Global Mirror with Change Volumes (GMCV)
Enables support for GM with higher RPO by using change volumes. This function is for
use in environments where the available bandwidth between the sites is smaller than the
update rate of the replicated workload.
With GMCV, or GM with cycling, change volumes must be configured for both the primary
and secondary volumes in each relationship. A copy is taken of the primary volume in the
relationship to the change volume. The background copy process reads data from the
stable and consistent change volume, copying the data to the secondary volume in the
relationship.
Copy on Write (CoW) technology is used to maintain the consistent image of the primary
volume for the background copy process to read. The changes that took place while the
background copy process was active are also tracked. The change volume for the
secondary volume can also be used to maintain a consistent image of the secondary
volume while the background copy process is active.
GMCV places lower demands on inter-site link bandwidth than the other RC types, and it is mostly used when the link parameters are not sufficient to maintain an RC relationship without impacting host performance.



Intersystem replication is possible over an FC or IP link. The native IP replication feature
enables replication between any SAN Volume Controller / Storwize family systems by using
the built-in networking ports of the system nodes.

Note: All three types of RC are supported to work over an IP link, but the recommended
type is GMCV.

A license is required for the IBM FlashSystem 5100 system to participate in an RC


relationship of any type. For Storwize 5030H systems that contain two I/O groups, two
licenses are needed.
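
The three types map to variations of a single CLI command. The following sketch assumes a partnership with a remote system named remote_sys; all volume and relationship names are placeholders:

  mkrcrelationship -master vol0 -aux vol0_dr -cluster remote_sys -name rel0           # MM (synchronous) by default
  mkrcrelationship -master vol0 -aux vol0_dr -cluster remote_sys -global -name rel0   # GM (asynchronous)
  chrcrelationship -masterchange vol0_chg rel0   # attach a change volume to a stopped GM relationship, and
  chrcrelationship -cyclingmode multi rel0       # switch it to GMCV
  startrcrelationship rel0                       # start copying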

1.7.1 HyperSwap
The HyperSwap function is a HA feature that provides dual-site, active-active access to a
volume. HyperSwap functions are available on systems that can support more than one I/O
group, such as the IBM FlashSystem 5100 and Storwize 5030H systems.

With HyperSwap, a fully independent copy of the data is maintained at each site. When data
is written by hosts at either site, both copies are synchronously updated before the write
operation is completed. The HyperSwap function automatically optimizes itself to minimize
data that is transmitted between two sites, and to minimize host read and write latency.

If the nodes or the storage at either site goes offline, and an online, accessible, and up-to-date copy remains, the HyperSwap function automatically fails over access to the online copy. The HyperSwap function also automatically resynchronizes the two copies when possible.

The HyperSwap solution requires one IBM FlashSystem 5100 Control Enclosure at each site,
and it requires a third site that acts as a tie-breaking quorum device. The third site can be
implemented as FC-attached storage or IP-linked quorum application.

To construct HyperSwap volumes, active-active replication relationships are made between
the copies at each site. These relationships automatically run and switch direction according
to which copy or copies are online and up to date. The relationships provide access to
whichever copy is up to date through a single volume, which has a unique ID. This volume is
visible as a single object across both sites (I/O groups), and is mounted to a host system.

The HyperSwap function works with the standard multipathing drivers that are available on a wide variety of host types, with no additional host support required to access the highly available (HA) volume. Where multipathing drivers support Asymmetric Logical Unit Access (ALUA), the storage system tells the multipathing driver which nodes are closest to it and should be used to minimize I/O latency. You need only to tell the storage system which site a host is connected to, and it configures host pathing optimally, as shown in the sketch that follows.
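
The following sketch assumes that the system topology is already set to hyperswap (chsystem -topology hyperswap) and that pools 0 and 1 are at different sites; the host and volume names are placeholders:

  chhost -site site1 Host0                              # tell the system which site the host is connected to
  mkvolume -pool 0:1 -size 100 -unit gb -name hs_vol0   # a volume that spans pools at both sites becomes a HyperSwap volume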

Because remote mirroring is used to support the HyperSwap capability, remote mirroring
licensing is a requirement for this feature on the IBM FlashSystem 5100 and Storwize 5030H
systems. Depending on the number of HyperSwap volumes, a FlashCopy upgrade license
might also be required on the Storwize 5030H system.

1.7.2 Features for application integration
The IBM FlashSystem 5100 system includes the following features, which enable tight
integration with VMware:
򐂰 vCenter plug-in: Enables monitoring and self-service provisioning of the system from
within VMware vCenter.
򐂰 vStorage APIs for Array Integration (VAAI) support: This function supports
hardware-accelerated virtual machine (VM) copy / migration and hardware-accelerated
VM initiation, and accelerates VMware Virtual Machine File System (VMFS).
򐂰 VMware Site Recovery Manager (SRM) support: Supports automated storage and host failover, failover testing, and failback.
򐂰 VVOL integration for better usability: The migration of space-efficient volumes between
storage containers maintains the space efficiency of volumes. Cloning a VM achieves a
full independent set of VVOLs, and resiliency is improved for VMs if volumes start running
out of space.

VMware vSphere Virtual Volumes


The system supports VVOLs, which allow VMware vCenter to automate the management of
system objects like volumes and pools. VVOLs is an integration and management framework
that virtualizes the Storwize family systems, enabling a more efficient operational model that
is optimized for virtualized environments and centered on the application instead of the
infrastructure.

VVOLs simplify operations through policy-driven automation that enables more agile storage
consumption for VMs and dynamic adjustments in real time when they are needed. It
simplifies the delivery of storage service levels to individual applications by providing finer
control of hardware resources and native array-based data services that can be instantiated
with VM granularity.

With VVOLs, VMware offers a paradigm in which an individual VM and its disks, rather than a
LUN, becomes a unit of storage management for a storage system. VVOLs encapsulate
virtual disks (VDisks) and other VM files, and natively store the files on the storage system.

By using a special set of APIs called vSphere APIs for Storage Awareness (VASA), the
storage system becomes aware of the VVOLs and their associations with the relevant VMs.
Through VASA, vSphere and the underlying storage system establish a two-way out-of-band
communication to perform data services and offload certain VM operations to the storage
system. For example, some operations, such as snapshots and clones, can be offloaded.

For more information about VVOLs and the actions that are required to implement this feature on the host side, see the VMware website.

IBM support for VASA is provided by IBM Spectrum Connect (before Version 3.4.0, IBM Spectrum Control Base Edition (SCB)). The Storwize family system administrator can assign ownership of VVOLs to IBM Spectrum Connect by creating a user with the VASA Provider security role. IBM Spectrum Connect provides communication between the VMware vSphere infrastructure and the Storwize family system.

Although the system administrator can complete certain actions on volumes and pools that
are owned by the VASA Provider security role, IBM Spectrum Connect retains management
responsibility for VVOLs. For more information about IBM Spectrum Connect, see the IBM
Spectrum Connect documentation.



Note: At the time of writing, VVOLs are not supported on DRPs because they use the child pool function, which is available with standard pools only.

1.8 Features for manageability


IBM FlashSystem 5100 systems offer the following manageability and serviceability features:
򐂰 An intuitive GUI
򐂰 IBM Call Home
򐂰 IBM Storage Insights

The Storwize family system GUI


IBM FlashSystem 5100 systems include an easy-to-use management GUI that runs on one of
the node canisters in the control enclosure to help you monitor, manage, and configure your
system. You can access the GUI by opening any supported web browser and entering the
management IP addresses.

IBM FlashSystem 5100 systems use a GUI with the same look and feel as other IBM Storwize
family solutions for a consistent management experience across all platforms. The GUI has
an improved overview dashboard that provides all information in an easy-to-understand
format and enables visualization of effective capacity. With the GUI, you can quickly deploy
storage and manage it efficiently.

Figure 1-7 shows the IBM FlashSystem 5100 dashboard view. This view is the default that is
displayed after the user logs on to the system.

Figure 1-7 IBM FlashSystem 5100 GUI dashboard

The IBM Storwize family systems also provide a CLI, which is useful for advanced
configuration and scripting.

The Storwize family systems support SNMP, email notifications that use Simple Mail Transfer
Protocol (SMTP), and syslog redirection for complete enterprise management access.

IBM Call Home
Call Home connects the system to IBM Service Personnel who can monitor and respond to
system events to ensure that your system remains running. The Call Home function opens a
service alert if a serious error occurs in the system, automatically sending the details of the
error and contact information to IBM Service Personnel.

If the system is entitled for support, a Problem Management Record (PMR) is automatically
created and assigned to the appropriate IBM Support team. The information that is provided
to IBM is an excerpt from the event log containing the details of the error, and client contact
information from the system. IBM Service Personnel contact the client and arrange service on
the system, which can greatly improve the speed of resolution by removing the need for the
client to detect the error and raise a support call themselves.

The system supports two methods to transmit notifications to the support center:
򐂰 Call Home with cloud services
Call Home with cloud services sends notifications directly to a centralized file repository
that contains troubleshooting information that is gathered from customers. Support
personnel can access this repository and automatically be assigned issues as problem
reports.
This method of transmitting notifications from the system to support removes the need for
customers to create problem reports manually. Call Home with cloud services also
eliminates email filters dropping notifications to and from support, which can delay
resolution of problems on the system.
This method sends notifications only to the predefined support center.
򐂰 Call Home with email notifications
Call Home with email notification sends notifications through a local email server to
support and local users or services that monitor activity on the system. With email
notifications, you can send notifications to support and designate internal distribution of
notifications, which alerts internal personnel about potential problems. Call Home with
email notifications requires configuring at least one email server, and local users.
However, external notifications to the support center can be dropped if filters on the email
server are active. To eliminate this problem, Call Home with email notifications is not
recommended as the only method to transmit notifications to the support center. Call
Home with email notifications can be configured together with cloud services.

IBM highly encourages all clients to take advantage of the Call Home feature so that you and
IBM can collaborate for your success.
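
Configuring Call Home with email notifications from the CLI might look like the following sketch; the server address, contact details, and support email address are placeholders:

  mkemailserver -ip 192.168.1.25   # register the local SMTP server
  chemail -reply storage@example.com -contact "Jane Doe" -primary 5551234 -location "DC1"
                                   # set the contact details that are sent with notifications
  mkemailuser -address callhome@example.ibm.com -usertype support -err on
                                   # send error notifications to the support center
  startemail                       # activate email notification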

IBM Storage Insights


IBM Storage Insights is an IBM Cloud® software as a service (SaaS) offering that can help
you monitor and optimize the storage resources in the system and across your data center.
IBM Storage Insights monitors your storage environment and provides information about the
statuses of multiple systems in a single dashboard. You can view data from the perspectives
of the servers, applications, and file systems. Two versions of IBM Storage Insights are
available: IBM Storage Insights and IBM Storage Insights Pro.

When you order the IBM FlashSystem 5100 system, IBM Storage Insights is available for
free. With this version, you can monitor the basic health, status, and performance of various
storage resources.



IBM Storage Insights Pro is a subscription-based product that provides a more
comprehensive view of the performance, capacity, and health of your storage resources. In
addition to the features that are offered by IBM Storage Insights, IBM Storage Insights Pro
provides tools for intelligent capacity planning, storage reclamation, storage tiering, and
performance troubleshooting services. Together, these features can help you reduce storage
costs and optimize your data center.

IBM Storage Insights is part of the monitoring strategy and helps to ensure the continued availability of IBM FlashSystem 5100 systems.

Cloud-based IBM Storage Insights provides a single dashboard that gives you a clear view of
all of your IBM block storage. You can make better decisions by seeing trends in performance
and capacity. With storage health information, you can focus on areas that need attention.
When IBM Support is needed, IBM Storage Insights simplifies uploading logs, speeds
resolution with online configuration data, and provides an overview of open tickets, all in one
place.

The following features are some of the ones that are available with IBM Storage Insights:
򐂰 A unified view of IBM systems:
– Provides a single view to see all your system’s characteristics.
– See all of your IBM storage inventory.
– Provides a live event feed so that you know in real time what is going on with your
storage so that you can act fast.
򐂰 IBM Storage Insights collects telemetry data and Call Home data and provides real-time
system reporting of capacity and performance.
򐂰 Overall storage monitoring by looking at the following information:
– The overall health of the system.
– Monitoring of the configuration to see whether it meets best practices.
– System resource management determines which system is overtaxed, and provides
proactive recommendations to fix it.
򐂰 IBM Storage Insights provides advanced customer service with an event filter that you can
use to accomplish the following tasks:
– You and IBM Support can view support tickets, open and close them, and track trends.
– You can use the autolog collection capability to collect the logs and send them to IBM
before IBM Support looks in to the problem. This capability can save as much as 50%
of the time to resolve the case.

Figure 1-8 shows a view of the IBM Storage Insights dashboard.

Figure 1-8 IBM Storage Insights dashboard

In order for IBM Storage Insights to operate, a lightweight data collector is installed in your
data center to stream performance, capacity, asset, and configuration metadata to your
IBM Cloud instance. The metadata flows in one direction: from your data center to IBM Cloud
over HTTPS. Only metadata is collected. The actual application data that is stored on the
storage systems cannot be accessed by the data collector. In IBM Cloud, your metadata is
AES256-encrypted and protected by physical, organizational, access, and security controls.

IBM Storage Insights is ISO/IEC 27001 Information Security Management certified.

For more information about IBM Storage Insights, see the following websites:
򐂰 IBM Storage Insights Fact Sheet
򐂰 Functional demonstration environment
򐂰 IBM Storage Insights security information
򐂰 IBM Storage Insights registration



1.9 IBM FlashSystem 7200 overview
This section describes the IBM FlashSystem 7200 architectural components, available
models, enclosure, software features, and licensing options.

1.9.1 IBM FlashSystem 7200 hardware components


Each IBM FlashSystem 7200 system consists of a control enclosure and FCM drives. The
control enclosure is the storage server that runs the IBM Spectrum Virtualize Software that
controls and provides features to store and manage data on the FCM or industry-standard
NVMe drives.

Figure 1-9 shows the front and rear views of the IBM FlashSystem 7200 system.

Figure 1-9 IBM FlashSystem 7200 front and rear views

Here are the core IBM FlashSystem 7200 components:


򐂰 IBM FlashSystem 7200 Control Enclosure:
– Power supply units (PSUs)
– Battery modules
– Fan modules
– Interface cards
– Cascade Lake CPUs and memory slots
򐂰 IBM FlashSystem FCM drives
򐂰 IBM FlashSystem 7000 Expansion Enclosures (SAS-attached)

Note: The IBM FlashSystem 7200 is also available with the optional purchase of Enterprise Class Support (ECS), which provides enhanced customer service response times, the services of an IBM Technical Advisor, and IBM-applied code upgrades through the Remote Code Load process.

1.9.2 IBM FlashSystem 7200 Control Enclosure
IBM FlashSystem 7200 is a 2U model that can support up to 24 FCM drives (IBM built NVMe
drives) with hardware compression and encryption or industry-standard NVMe drives of
various capacities. IBM FlashSystem 7200 can be configured with up to 1.5 TB of cache.

For more information about the drive types that are supported, see 1.13, “IBM FlashCore
Module drives, NVMe SSDs, and SCM drives” on page 67.

An IBM FlashSystem 7200 clustered system can contain up to four IBM FlashSystem 7200 systems and up to 3,040 drives. IBM FlashSystem 7200 systems can be added into existing clustered systems that include Storwize V7000 systems.

Mixed cluster naming


You must understand the behavior in mixed clusters to ensure that the system agrees on the name of the entire system. This section is an extension of the licensing work that was already done in Version 8.2.0 to allow IBM FlashSystem 9100 and 9200 products to cluster with the Storwize V7000 system. The extended rule is that the newest / highest system type overrules anything else in the cluster (for example, if you add an IBM FlashSystem 7200 system to a Storwize V7000 cluster, the system reports itself as an IBM FlashSystem 7200 system).

Here is the explicit order of priority:

IBM FlashSystem 9200 > IBM FlashSystem 9100 > IBM FlashSystem 7200 > Storwize V7000

Here are some examples of the order of priority:


򐂰 In an existing Storwize V7000 cluster, you add an IBM FlashSystem 7200 I/O group. The cluster is now an IBM FlashSystem 7200 system.
򐂰 If you then add an IBM FlashSystem 9100 system, the cluster is an IBM FlashSystem 9100 system.
򐂰 If you then add an IBM FlashSystem 9200 system, the cluster is an IBM FlashSystem 9200 system.

Figure 1-10 shows the IBM FlashSystem 7200 internal architecture.

Figure 1-10 IBM FlashSystem 7200 internal architecture



Figure 1-11 shows the IBM FlashSystem 7200 enclosure rear view.

Figure 1-11 IBM FlashSystem 7200 enclosure rear view

As shown in Figure 1-11, the IBM FlashSystem 7200 enclosure consists of redundant PSUs,
node canisters, and fan modules to provide redundancy and HA.

Figure 1-12 shows the internal hardware components of a node canister. The left of the picture is the front of the canister, which holds the fan modules and battery backup, followed by two Cascade Lake CPUs with Dual Inline Memory Module (DIMM) slots, and PCIe risers for adapters on the right.

Figure 1-12 Internal hardware components

1.9.3 IBM FlashSystem 7200 Model 824 system


The Model 824 system offers the following physical features:
򐂰 Two node canisters with four 8-core 2.1 GHz Cascade Lake CPUs and compression assist up to 40 gigabits per second (Gbps)
򐂰 Cache options from 128 GB (64 GB per canister) to 1.5 TB (768 GB per canister)
򐂰 Eight 10 Gb Ethernet onboard ports standard for iSCSI connectivity
򐂰 Up to three PCIe adapters (see the options that follow)

򐂰 Twenty-four slots for 2.5-inch NVMe flash drives
򐂰 2U 19-inch rack mount enclosure with AC power supplies
򐂰 One boot drive
򐂰 The PCIe adapter options are:
– Four-port 16 Gb FC/FC-NVMe card for 16 Gb FC connectivity
– Four-port 32 Gb FC/FC-NVMe card for 32 Gb FC connectivity
– Two-port 25 Gb iSCSI/RoCE/NVMe over Ethernet
– Two-port 25 Gb iSCSI/iWARP/NVMe over Ethernet
– Two-port 50/100 GbE iSCSI/RoCE/NVMe over Ethernet (not at GA)
– Two-port 50/100 GbE iSCSI/iWARP/NVMe over Ethernet (not at GA)
– 12 Gb SAS ports for expansion enclosure attachment

1.9.4 IBM FlashSystem 7200 Expansion Enclosures 12G and 24G


The following types of expansion enclosures are available:
򐂰 IBM FlashSystem 7200 large form factor (LFF) Expansion Enclosure Model 12G
򐂰 IBM FlashSystem 7200 SFF Expansion Enclosure Model 24G

The IBM FlashSystem 7200 LFF 12G Expansion Enclosure includes the following
components:
򐂰 Two expansion canisters
򐂰 12 Gb SAS ports for control enclosure and expansion enclosure attachment
򐂰 A total of 12 slots for 3.5-inch SAS drives
򐂰 2U 19-inch rack-mounted enclosure with AC power supplies

Figure 1-13 shows a Model 12G front view.

Figure 1-13 IBM FlashSystem 7200 LFF Expansion Enclosure Model 12G

IBM FlashSystem 7200 SFF Expansion Enclosure Model 24G includes the following
components:
򐂰 Two expansion canisters
򐂰 12 Gb SAS ports for control enclosure and expansion enclosure attachment
򐂰 A total of 24 slots for 2.5-inch SAS drives
򐂰 2U 19-inch rack mount enclosure with AC power supplies

The SFF Expansion Enclosure is a 2U enclosure that includes the following components:
򐂰 A total of twenty-four 2.5-inch drives (HDDs or SSDs).
򐂰 Two Storage Bridge Bay (SBB)-compliant Enclosure Services Manager (ESM) canisters.
򐂰 Two fan assemblies, which mount between the drive midplane and the node canisters.
Each fan module is removable when the node canister is removed.



򐂰 Two power supplies.
򐂰 An RS232 port on the back panel (3.5 mm stereo jack), which is used for configuration
during manufacturing.

Figure 1-14 shows the front of an SFF Expansion Enclosure.

Figure 1-14 Front of an IBM FlashSystem 7200 SFF Expansion Enclosure

Figure 1-15 shows the rear view of the expansion enclosure.

Figure 1-15 Rear of an IBM FlashSystem 7200 Expansion Enclosure

1.9.5 Dense Expansion Enclosure 92G


Dense expansion drawers, or dense drawers, are optional disk expansion enclosures that are
5U rack-mounted. Each chassis features two expansion canisters, two power supplies, two
expander modules, and a total of four fan modules.

Each dense drawer can hold up to 92 drives that are positioned in four rows of 14 and another
three rows of 12 mounted drive assemblies. Two Secondary Expander Modules (SEMs) are
centrally located in the chassis. One SEM addresses 54 drive ports, and the other addresses
38 drive ports.

The drive slots are numbered 1 - 14, starting from the left rear slot and working from left to
right, back to front.

Each canister in the dense drawer chassis features two SAS ports numbered 1 and 2. The
use of SAS port 1 is mandatory because the expansion enclosure must be attached to an IBM
FlashSystem 7200 node or another expansion enclosure. SAS connector 2 is optional
because it is used to attach more expansion enclosures.

Each IBM FlashSystem 7200 system can support up to four dense drawers per SAS chain.

Figure 1-16 shows a dense expansion drawer.

Figure 1-16 IBM Dense Expansion Drawer

SAS chain limitations


When attaching expansion enclosures to the control enclosure, you are not limited by the type
of the enclosure (provided that it meets all generation-level restrictions). The only limitation
for each SAS chain is its chain weight. Each type of enclosure defines its own chain weight:
򐂰 Enclosures 12G and 24G have a chain weight of 1.
򐂰 Enclosure 92G has a chain weight of 2.5.

The maximum chain weight is 10.

For example, you can combine seven 24G and one 92G expansions (7x1 + 1x2.5 = 9.5 chain
weight), or two 92G enclosures, one 12G, and four 24G (2x2.5 + 1x1 + 4x1 = 10 chain
weight).
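
Because the chain weight rule is purely arithmetic, it can be checked with a short script. The
following Python sketch is for illustration only; the weights match the values that are listed
above:

# Chain weight per enclosure type, as defined above.
CHAIN_WEIGHT = {"12G": 1.0, "24G": 1.0, "92G": 2.5}
MAX_CHAIN_WEIGHT = 10.0

def chain_is_valid(enclosures):
    """Return True if the summed chain weight of one SAS chain is within the limit."""
    return sum(CHAIN_WEIGHT[e] for e in enclosures) <= MAX_CHAIN_WEIGHT

print(chain_is_valid(["24G"] * 7 + ["92G"]))                 # True: 7x1 + 2.5 = 9.5
print(chain_is_valid(["92G", "92G", "12G"] + ["24G"] * 4))   # True: exactly 10.0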



An example of chain weight 4.5 with one 24G, one 12G, and one 92G enclosure, all correctly
cabled, is shown in Figure 1-17.

Figure 1-17 Connecting SAS cables while complying with the maximum chain weight

1.9.6 IBM FlashSystem 7200 Utility Models U7C


IBM FlashSystem 7200 Utility Model U7C provides a variable capacity storage offering.
This model offers a fixed capacity, with a base subscription of 35% of the total capacity.

IBM Storage Insights (free edition or Pro) is used to monitor system usage, and the capacity
that is used beyond the base 35% is billed on a per-month, per-terabyte basis so that you can
grow or shrink usage and pay only for the configured capacity.

IBM FlashSystem Utility Models are provided for customers who can benefit from a variable
capacity system, where billing is based only on actual provisioned space. The hardware is
leased through IBM Global Finance on a three-year lease, which entitles the customer to use
up to 35% of the total system capacity at no additional cost. If storage needs increase beyond
that 35% capacity, usage is billed based on the average daily provisioned capacity per
terabyte, per month on a quarterly basis.

Example: Total system capacity of 115 TB


A customer has an IBM FlashSystem 7200 Utility Model with 4.8 TB NVMe drives for a total
system capacity of 115 TB. The base subscription for such a system is 40.25 TB. During the
months where the average daily usage is below 40.25 TB, there is no additional billing.

The system monitors daily provisioned capacity and averages those daily usage rates over
the month term. The result is the average daily usage for the month.

If a customer uses 45 TB, 42.5 TB, and 50 TB in three consecutive months, IBM Storage
Insights calculates the overage as shown in Table 1-7, rounding to the nearest terabyte.

Table 1-7 Billing calculations based on customer usage


Average daily usage    Base        Overage    To be billed
45 TB                  40.25 TB    4.75 TB    5 TB
42.5 TB                40.25 TB    2.25 TB    2 TB
50 TB                  40.25 TB    9.75 TB    10 TB

The total capacity that is billed at the end of the quarter is 17 TB per month in this example.

Flash drive expansions may be ordered with the system in all supported configurations.

Table 1-8 shows the feature codes that are associated with the IBM FlashSystem 7200 U7C
Utility Model billing.

Table 1-8 IBM FlashSystem 7200 Utility Models U7C billing feature codes

Feature code    Description
#AE00           Variable Usage 1 TB per month
#AE01           Variable Usage 10 TB per month
#AE02           Variable Usage 100 TB per month

These features are used to purchase the variable capacity that is used in the IBM FlashSystem
7200 Utility Models. The features (feature codes #AE00, #AE01, and #AE02) provide
terabytes of capacity beyond the base subscription on the system. Usage is based on the
average capacity that is used per month. The prior three months' usage should be totaled,
and the corresponding number of #AE00, #AE01, and #AE02 features ordered quarterly.
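
The overage arithmetic that is described in the example can also be expressed as a short
Python sketch. This sketch is only an illustration of the calculation, not the actual IBM
Storage Insights implementation; the 35% base subscription and the rounding to the nearest
terabyte follow Table 1-7:

def monthly_overage_tb(avg_daily_usage_tb, total_capacity_tb, base_fraction=0.35):
    """Return the billable overage in TB, rounded to the nearest terabyte."""
    base_tb = total_capacity_tb * base_fraction        # 115 TB x 0.35 = 40.25 TB
    return max(0, round(avg_daily_usage_tb - base_tb))

monthly_usage = [45, 42.5, 50]                         # average daily usage per month
overages = [monthly_overage_tb(u, 115) for u in monthly_usage]
print(overages, sum(overages))                         # [5, 2, 10] and 17 TB for the quarter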

1.10 IBM FlashSystem 9100 overview


This section describes the IBM FlashSystem 9100 architectural components, available
models, enclosure, software features, and licensing options.

1.10.1 IBM FlashSystem 9100 hardware components


Each IBM FlashSystem 9100 system consists of a control enclosure and FCM drives. The
control enclosure is the storage server that runs the IBM Spectrum Virtualize Software that
controls and provides features to store and manage data on the FCM or industry-standard
NVMe drives.



Figure 1-18 shows the front and rear views of the IBM FlashSystem 9100 system.

Figure 1-18 Front and rear views of the IBM FlashSystem 9100 system

Here are the core IBM FlashSystem 9100 components:


򐂰 IBM FlashSystem 9100 Control Enclosure
򐂰 PSUs
򐂰 Battery modules
򐂰 Fan modules
򐂰 Interface cards
򐂰 Skylake CPUs and memory slots
򐂰 IBM FlashSystem FCM drives
򐂰 IBM FlashSystem 9100 Expansion Enclosures (SAS-attached)

1.10.2 IBM FlashSystem 9100 Control Enclosure


The IBM FlashSystem 9100 Control Enclosure is a 2U model that can support up to 24 FCM
drives (IBM-built NVMe drives) with hardware compression and encryption, or
industry-standard NVMe drives of various capacities. It can also support up to four SCM class
drives (IBM Spectrum Virtualize Software V8.3.1 or later is needed to support the SCM drives).

The IBM FlashSystem 9100 system can be configured with up to 1.5 TB of cache.

For more information about the drive types that are supported, see 1.13, “IBM FlashCore
Module drives, NVMe SSDs, and SCM drives” on page 67.

Figure 1-19 shows the IBM FlashSystem 9100 internal architecture.

Figure 1-19 IBM FlashSystem 9100 internal architecture

An IBM FlashSystem 9100 clustered system can contain up to four IBM FlashSystem 9100
systems and up to 3,040 drives. IBM FlashSystem 9100 systems can be added into existing
clustered systems that include Storwize V7000 systems.

Figure 1-20 shows the enclosure rear view.

Figure 1-20 Enclosure rear view

As shown in Figure 1-20, the IBM FlashSystem 9100 enclosure consists of redundant PSUs,
node canisters, and fan modules to provide redundancy and HA.



Figure 1-21 shows the internal hardware components of a node canister. At the left of the
picture is the front of the canister, where the fan modules and battery backup are located,
followed by two Skylake CPUs, DIMM slots, and PCIe risers for adapters on the right.

Figure 1-21 Internal hardware components

1.10.3 IBM FlashSystem 9100 Model 9110 Control Enclosure AF7


IBM FlashSystem 9100 Model 9110 Control Enclosure AF7 has the following features:
򐂰 Two node canisters, with a total of four 8-core 1.7 GHz Skylake CPUs and compression
assist up to 40 Gbps
򐂰 Cache options from 128 GB (64 GB per canister) to 1.5 TB (768 GB per canister)
򐂰 Eight 10 Gb Ethernet ports standard for iSCSI connectivity
򐂰 16 Gb FC, 25 Gb Ethernet, and 10 Gb Ethernet ports for FC and iSCSI connectivity
򐂰 12 Gb SAS ports for expansion enclosure attachment
򐂰 Twenty-four slots for 2.5-inch NVMe flash drives
򐂰 2U 19-inch rack-mounted enclosure with AC power supplies
򐂰 One boot drive

1.10.4 IBM FlashSystem 9100 Model 9150 Control Enclosure AF8


IBM FlashSystem 9100 Model 9150 Control Enclosure AF8 has the following features:
򐂰 Four 14-core 2.2 GHz Skylake CPUs with dual boot drives. Other hardware features and
software functions are common to both models of the IBM FlashSystem 9100 system.
򐂰 Two node canisters, with a total of four 14-core 2.2 GHz Skylake CPUs and compression
assist up to 100 Gbps.
򐂰 Cache options from 128 GB (64 GB per canister) to 1.5 TB (768 GB per canister).
򐂰 Eight 10 Gb Ethernet ports standard for iSCSI connectivity.
򐂰 16 Gb FC, 25 Gb Ethernet, and 10 Gb Ethernet ports for FC and iSCSI connectivity.
򐂰 12 Gb SAS ports for expansion enclosure attachment.

򐂰 Twenty-four slots for 2.5-inch NVMe flash drives.
򐂰 2U 19-inch rack mount enclosure with AC power supplies.
򐂰 Two boot drives.

1.10.5 IBM FlashSystem 9100 Expansion Enclosure Models AFF and A9F

All IBM FlashSystem 9100 Expansion Enclosure Models AFF and A9F can be attached to an
IBM FlashSystem 9100 Control Enclosure by using the SAS adapter.

IBM FlashSystem 9100 Model AFF


IBM FlashSystem 9100 Model AFF holds up to twenty-four 2.5-inch SAS flash drives in a 2U
19-inch rack-mounted enclosure. An intermixture of capacity drives is allowed in any drive
slot, and up to 20 AFF enclosures (490 drives) can be attached to the control enclosure.

IBM FlashSystem 9100 Model A9F


IBM FlashSystem 9100 Model A9F holds up to ninety-two 3.5-inch SAS flash drives in a 5U
19-inch rack-mounted enclosure. An intermixture of capacity drives is allowed in any drive
slot, and up to eight A9F enclosures (736 drives) can be attached to the control enclosure.

1.10.6 IBM FlashSystem 9100 Utility Models UF7 and UF8


IBM FlashSystem 9100 Utility Models UF7 and UF8 provide a variable capacity storage
offering. These models offer a fixed capacity, with a base subscription of 35% of the total
capacity.

IBM Storage Insights (free edition or Pro) is used to monitor system usage, and capacity that
is used beyond the base 35% is billed on a per-month, per-terabyte basis. You can grow or
shrink usage and pay only for the configured capacity.

IBM FlashSystem Utility Models are provided for customers who can benefit from a variable
capacity system where billing is based only on actual provisioned space. The hardware is
leased through IBM Global Finance on a three-year lease, which entitles the customer to use
up to 35% of the total system capacity at no additional cost. If storage needs increase beyond
that 35% capacity, usage is billed based on the average daily provisioned capacity per
terabyte, per month, on a quarterly basis.

Example: Total system capacity of 115 TB


A customer has an IBM FlashSystem 9100 Utility Model with 4.8 TB NVMe drives for a total
system capacity of 115 TB. The base subscription for such a system is 40.25 TB. During the
months where the average daily usage is below 40.25 TB, there is no additional billing.

The system monitors daily provisioned capacity and averages those daily usage rates over
the month. The result is the average daily usage for the month.



If a customer uses 45 TB, 42.5 TB, and 50 TB in three consecutive months, IBM Storage
Insights calculates the overage as shown in Table 1-9, rounding to the nearest terabyte.

Table 1-9 Billing calculations based on customer usage


Average daily usage    Base        Overage    To be billed
45 TB                  40.25 TB    4.75 TB    5 TB
42.5 TB                40.25 TB    2.25 TB    2 TB
50 TB                  40.25 TB    9.75 TB    10 TB

The total capacity that is billed at the end of the quarter is 17 TB per month in this example.

Flash drive expansions may be ordered with the system in all supported configurations.

Table 1-10 shows the feature codes that are associated with the IBM FlashSystem 9100
Utility Model UF7 and UF8 billing.

Table 1-10 IBM FlashSystem 9100 Utility Model UF7 and UF8 billing feature codes

Feature code Description

#AE00 Variable Usage 1 TB per month

#AE01 Variable Usage 10 TB per month

#AE02 Variable Usage 100 TB per month

These features are used to purchase the variable capacity that is used in the IBM
FlashSystem 9100 Utility Models. The features (#AE00, #AE01, and #AE02) provide
terabytes of capacity beyond the base subscription on the system. Usage is based on the
average capacity that is used per month. The prior three months' usage should be totaled,
and the corresponding number of #AE00, #AE01, and #AE02 features ordered quarterly.

1.11 IBM FlashSystem 9200 overview


This section describes the IBM FlashSystem 9200 architectural components, available
models, enclosure, software features, and licensing options.

1.11.1 IBM FlashSystem 9200 hardware components


Each IBM FlashSystem 9200 system consists of a control enclosure and FCM drives. The
control enclosure is the storage server that runs the IBM Spectrum Virtualize Software that
controls and provides features to store and manage data on the FCM or industry-standard
NVMe drives. Figure 1-22 on page 47 shows the IBM FlashSystem 9200 front and rear views.

Figure 1-22 IBM FlashSystem 9200 front and rear views

Here are the core IBM FlashSystem 9200 components:


򐂰 IBM FlashSystem 9200 Control Enclosure:
– PSUs
– Battery modules
– Fan modules
– Interface cards
– Cascade Lake CPUs and memory slots
򐂰 IBM FlashSystem FCM drives
򐂰 IBM FlashSystem 9000 Expansion Enclosures (SAS-attached)

1.11.2 IBM FlashSystem 9200 Control Enclosure


The IBM FlashSystem 9200 system is a 2U model that can support up to 24 FCM drives (IBM
built NVMe drives) with hardware compression and encryption or industry-standard NVMe
drives of various capacities. An IBM FlashSystem 9200 system can be configured with up to
1.5 TB of cache.

An IBM FlashSystem 9200 clustered system can contain up to four IBM FlashSystem 9200
systems and up to 3,040 drives. IBM FlashSystem 9200 systems can be added to existing
clustered systems that include Storwize V7000 systems.

Mixed cluster naming


You need clear rules about the behavior of mixed clusters to ensure that the system agrees on
the name of the entire system. This topic is an extension of the work that was done in Version
8.2.0 for licensing to allow IBM FlashSystem 9100 products to cluster with Storwize V7000
systems. The extended rule is that the newest (highest) system type overrules anything
else in the cluster (for example, if you add an IBM FlashSystem 9200 system to an IBM
FlashSystem 9100 or Storwize V7000 system, the system reports itself as an IBM
FlashSystem 9200 system).

Here is the explicit order of priority:

IBM FlashSystem 9200 system > IBM FlashSystem 9100 system > IBM FlashSystem 7200
system > Storwize V7000 system



Here are some examples:
򐂰 Add an IBM FlashSystem 7200 I/O group to an existing Storwize V7000 cluster, and now
the cluster is an IBM FlashSystem 7200 cluster.
򐂰 If you then add an IBM FlashSystem 9100 system, the cluster is an IBM FlashSystem 9100
cluster.
򐂰 If you then add an IBM FlashSystem 9200 system, the cluster is an IBM FlashSystem
9200 cluster.

Figure 1-23 shows the IBM FlashSystem 9200 internal architecture.

Figure 1-23 IBM FlashSystem 9200 internal architecture

Figure 1-24 shows the enclosure’s rear view.

Figure 1-24 Rear view of the enclosure

As shown in Figure 1-24, the IBM FlashSystem 9200 enclosure consists of redundant PSUs,
node canisters, and fan modules to provide redundancy and HA.

Figure 1-25 shows the internal hardware components of a node canister. At the left of the
picture is the front of the canister, where the fan modules and battery backup are located,
followed by two Cascade Lake CPUs, memory DIMM slots, and PCIe risers for the
adapters on the right.

Figure 1-25 Internal hardware components

1.11.3 IBM FlashSystem 9200 Control Enclosure Model AG8


IBM FlashSystem 9200 Control Enclosure Model AG8 has the following components:
򐂰 Two node canisters, with a total of four 16-core 2.3 GHz Cascade Lake CPUs and
compression assist up to 100 Gbps
򐂰 Cache options from 128 GB (64 GB per canister) to 1.5 TB (768 GB per canister)
򐂰 Eight 10 Gb Ethernet onboard ports standard for iSCSI connectivity
򐂰 Up to three PCIe adapters (see the options below)
򐂰 Twenty-four slots for 2.5-inch NVMe flash drives
򐂰 2U 19-inch rack mount enclosure with AC power supplies
򐂰 Two boot drives
򐂰 The PCIe adapter options are:
– Four-port 16 Gb FC/FC-NVMe card for 16 Gb FC connectivity
– Four-port 32 Gb FC/FC-NVMe card for 32 Gb FC connectivity
– Two-port 25 Gb iSCSI/RoCE/NVMe over Ethernet
– Two-port 25 Gb iSCSI/iWARP/NVMe over Ethernet
– Two-port 50/100 GbE iSCSI/RoCE/NVMe over Ethernet (not at GA)
– Two-port 50/100 GbE iSCSI/iWARP/NVMe over Ethernet (not at GA)
– 12 Gb SAS ports for expansion enclosure attachment



1.11.4 IBM FlashSystem 9200 Utility Model UG8

IBM FlashSystem 9200 Utility Model UG8 provides a variable capacity storage offering.
This model offers a fixed capacity with a base subscription of 35% of the total capacity.

IBM Storage Insights (free edition or Pro) is used to monitor system usage, and capacity that
is used beyond the base 35% is billed on a per-month, per-terabyte basis. You can grow or
shrink usage, and pay only for the configured capacity.

IBM FlashSystem Utility Models are provided for customers who can benefit from a variable
capacity system, where billing is based only on actual provisioned space. The hardware is
leased through IBM Global Finance on a three-year lease, which entitles the customer to use
up to 35% of the total system capacity at no additional cost. If storage needs increase beyond
that 35% capacity, usage is billed based on the average daily provisioned capacity per
terabyte, per month, on a quarterly basis.

Example: Total system capacity of 115 TB


A customer has an IBM FlashSystem 9200 Utility Model with 4.8 TB NVMe drives for a total
system capacity of 115 TB. The base subscription for such a system is 40.25 TB. During the
months where the average daily usage is below 40.25 TB, there is no additional billing.

The system monitors daily provisioned capacity and averages those daily usage rates over
the month term. The result is the average daily usage for the month.

If a customer uses 45 TB, 42.5 TB, and 50 TB in three consecutive months, IBM Storage
Insights calculates the overage as shown in Table 1-11, rounding to the nearest terabyte.

Table 1-11 Billing calculations that are based on customer usage


Average daily usage    Base        Overage    To be billed
45 TB                  40.25 TB    4.75 TB    5 TB
42.5 TB                40.25 TB    2.25 TB    2 TB
50 TB                  40.25 TB    9.75 TB    10 TB

The total capacity that is billed at the end of the quarter is 17 TB per month in this example.

Flash drive expansions may be ordered with the system in all supported configurations.

Table 1-12 shows the feature codes that are associated with the IBM FlashSystem 9200
Utility Model UG8 billing.

Table 1-12 IBM FlashSystem 9200 Utility Model UG8 billing feature codes

Feature code Description

#AE00 Variable Usage 1 TB per month

#AE01 Variable Usage 10 TB per month

#AE02 Variable Usage 100 TB per month

These features are used to purchase the variable capacity that is used in the IBM
FlashSystem 9200 Utility Models. The features (#AE00, #AE01, and #AE02) provide
terabytes of capacity beyond the base subscription on the system. Usage is based on the
average capacity that is used per month. The prior three months' usage should be totaled,
and the corresponding number of #AE00, #AE01, and #AE02 features ordered quarterly.
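
As an illustration of this quarterly ordering step, the following Python sketch decomposes a
quarterly usage total into the smallest number of #AE02, #AE01, and #AE00 features. The
greedy decomposition is an assumption for illustration only; confirm the actual ordering
process with IBM:

def feature_counts(total_tb):
    """Greedily decompose a quarterly usage total into 100, 10, and 1 TB-per-month features."""
    counts = {}
    for feature, size_tb in (("#AE02", 100), ("#AE01", 10), ("#AE00", 1)):
        counts[feature], total_tb = divmod(total_tb, size_tb)
    return counts

print(feature_counts(17))   # {'#AE02': 0, '#AE01': 1, '#AE00': 7}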

1.11.5 IBM FlashSystem 9000 Expansion Enclosure Models AFF and A9F
All IBM FlashSystem 9000 Expansion Enclosure Models AFF and A9F can be attached to an
IBM FlashSystem 9200 Control Enclosure by using the SAS adapter.

IBM FlashSystem 9000 Expansion Enclosure Model AFF


IBM FlashSystem 9000 Expansion Enclosure Model AFF holds up to twenty-four 2.5-inch
SAS flash drives in a 2U 19-inch rack mount enclosure. An intermixture of capacity drives is
allowed in any drive slot, and up to 20 AFF enclosures (490 drives) can be attached to the
control enclosure.

Figure 1-26 shows the front view of the IBM FlashSystem 9000 Expansion Enclosure Model
AFF.

Figure 1-26 IBM FlashSystem 9000 Expansion Enclosure Model AFF

IBM FlashSystem 9000 Expansion Enclosure Model A9F


IBM FlashSystem 9000 Expansion Enclosure Model A9F holds up to ninety-two 3.5-inch SAS
flash drives in a 5U 19-inch rack mount enclosure. An intermixture of capacity drives is
allowed in any drive slot, and up to eight A9F enclosures (736 drives) can be attached to the
control enclosure.



Figure 1-27 shows the front view of the IBM FlashSystem 9000 Expansion Enclosure Model
A9F.

Figure 1-27 IBM FlashSystem 9000 Expansion Enclosure Model front view

The IBM FlashSystem 9000 Expansion Enclosure Models AFF and A9F can attach to either
the IBM FlashSystem 9100 or the IBM FlashSystem 9200 Control Enclosures, but the SAS
chain attachment rules must be observed.

SAS chain limitations


When attaching expansion enclosures to the control enclosure, you are not limited by the type
of the enclosure (provided that it meets all generation-level restrictions). The only limitation
for each SAS chain is its chain weight. Each type of enclosure defines its own chain weight:
򐂰 The IBM FlashSystem 9000 Expansion Enclosure Model AFF has a chain weight of 1.
򐂰 The IBM FlashSystem 9000 Expansion Enclosure Model A9F has a chain weight of 2.5.

The maximum chain weight is 10.

For example, you can combine seven IBM FlashSystem 9000 Expansion Enclosure Model
AFF and one IBM FlashSystem 9000 Expansion Enclosure Model A9F expansions (7 x 1 + 1
x 2.5 = 9.5 chain weight) or two IBM FlashSystem 9000 Expansion Enclosure Model A9F
enclosures and five IBM FlashSystem 9000 Expansion Enclosure Model AFF expansions (2 x
2.5 + 5 x 1 = 10 chain weight).

An example of chain weight 4.5 with two IBM FlashSystem 9000 Expansion Enclosure Model
AFF enclosures and one IBM FlashSystem 9000 Expansion Enclosure Model A9F enclosure,
all correctly cabled, is shown in the following figures.

Figure 1-28 shows an IBM FlashSystem 9100 system connecting SAS cables while
complying with the maximum chain weight.

Figure 1-28 IBM FlashSystem 9100 system connecting SAS cables while complying with the maximum
chain weight



Figure 1-29 shows the IBM FlashSystem 9200 system connecting SAS cables while
complying with maximum chain weight.

Figure 1-29 IBM FlashSystem 9200 system connecting SAS cables while complying with maximum
chain weight

1.12 IBM FlashSystem 9200R Rack Solution overview


IBM is now offering a pre-cabled, pre-configured rack product that is called the IBM
FlashSystem 9200R Rack Solution, which contains multiple IBM FlashSystem 9200 Control
Enclosures and uses IBM Spectrum Virtualize to linearly scale the performance and capacity
through clustering. For more information about this product, see IBM FlashSystem 9200R
Rack Solution Product Guide, REDP-5593.

The IBM FlashSystem 9200R Rack Solution system is available as a pre-cabled,
pre-configured rack with a dedicated FC network for clustering. Available with two, three, or
four clustered IBM FlashSystem 9200 systems and up to four expansion enclosures, it can be
ordered as an IBM FlashSystem 9202R, IBM FlashSystem 9203R, or IBM FlashSystem 9204R
system.

Pre-assembled, installed, and configured by IBM, these racks contain the following
components:
򐂰 Two, three, or four IBM FlashSystem 9200 Control Enclosure Model AG8 clustered
systems, which can be specified by ordering an IBM FlashSystem 9202R, IBM
FlashSystem 9203R, or IBM FlashSystem 9204R, respectively.
򐂰 A pair of either Brocade or Cisco 32 Gb FC switches for a dedicated FC clustered network.
򐂰 An option for up to four IBM FlashSystem 9000 SFF Expansion Enclosures (Model AFF)
or up to two IBM FlashSystem 9000 LFF Expansion Enclosures (Model A9F).

The final configuration occurs on site following the delivery of the systems. More components
can be added to the rack after delivery to meet the growing needs of the business.

Note: Other hardware or software that is used with an IBM FlashSystem 9200R Rack
Solution product is not covered under Enterprise Class Support (ECS). The other machine
types and models (MTMs) within the rack-based IBM FlashSystem 9200R Rack Solution
product are covered under their own warranty and maintenance terms and conditions.

Rack rules
The IBM FlashSystem 9200R Rack Solution product represents a limited set of possible
configurations. Each IBM FlashSystem 9200R Rack Solution order must contain these
components:
򐂰 Two, three, or four 9848 Model AG8 Control Enclosures.
򐂰 Two Brocade 8960-F24 or two Cisco 8977-T32 FC switches.
򐂰 Optionally, 0 - 4 9848 Model AFF Expansion Enclosures, with no more than one expansion
enclosure per Model AG8 Control Enclosure and no mixing with the 9848 Model A9F
Expansion Enclosure.
򐂰 Optionally, 0 - 2 9848 Model A9F Expansion Enclosures, with no more than one expansion
enclosure per Model AG8 Control Enclosure and no mixing with the 9848 Model AFF
Expansion Enclosure.
򐂰 One 7965-S42 rack with the appropriate power distribution units (PDUs) that are required
to power components within the rack.
򐂰 All components in the rack must include feature codes #FSRS and #4651.
򐂰 For the Model AG8 Control Enclosures and the Model AFF and A9F Expansion
Enclosures, the first and largest capacity enclosure includes feature code #AL01, with
subsequent enclosures using #AL02, #AL03, and #AL04 in capacity order. The 9848
Model AG8 Control Enclosure with #AL01 must also include #AL0R.

Following the initial order, each 9848 Model AG8 Control Enclosure can be upgraded
through MES.

More components can be ordered separately and added to the rack within the configuration
limitations of the IBM FlashSystem 9200 system. Clients must ensure that the space, power,
and cooling requirements are met. If assistance is needed with the installation of these
additional components beyond the service that is provided by your IBM System Services
Representative (IBM SSR), IBM Lab Services are available.

Table 1-13 shows the IBM FlashSystem 9200R Rack Solution combinations, the MTMs, and
their associated feature codes.

Table 1-13 IBM FlashSystem 9200R Rack Solution combinations


Machine type and model   Description                                            Quantity
7965-S42                 IBM Enterprise Slim Rack                               1
8960-F24                 IBM SAN24B-6 Fibre Channel switch (Brocade)            2 (a)
8977-T32                 IBM SAN32C-6 Fibre Channel switch (Cisco)              2 (a)
9848-AFF                 IBM FlashSystem 9000 2U SFF Expansion Enclosure        0 - 4 (b)
                         with 3-year Warranty and ECS
9848-AG8                 IBM FlashSystem 9200 Control Enclosure with 3-year     2, 3, or 4
                         Warranty and ECS
9848-A9F                 IBM FlashSystem 9000 5U LFF high-density Expansion     0 - 2 (b)
                         Enclosure with 3-year Warranty and ECS

a. For the FC switch, choose either two of machine type (MT) 8977 or two of MT 8960.
b. For extra expansion enclosures, choose either Model AFF, Model A9F, or none. You cannot
use both.

IBM FlashSystem 9200R Rack Solution configurations: Rack diagrams

This section shows the rack diagrams for the minimum and maximum IBM FlashSystem
9200R Rack Solution configurations with both A9F and AFF Expansion Enclosures.

Key to figures
The key to the symbols that are used in the figures in this section is shown in Table 1-14.

Table 1-14 Key to the symbols that are used in the figures

Label           Description
CTLn            9848-AG8 Control Enclosure number n of 4. CTL1 and CTL2 are
                required; CTL3 and CTL4 are optional.
EXPn            9848 Expansion Enclosure number n. Optional.
AFF EXPn        9848-AFF 2U Expansion Enclosure number n of 4.
A9F EXPn        9848-A9F 5U Expansion Enclosure number n of 2.
FC SWn          Fibre Channel switch n of 2. These switches are either both
                8977-T32 or both 8960-F24.
PDU A, PDU B    PDUs. Both have the same rack feature code: #ECJJ, #ECJL,
                #ECJN, or #ECJQ.

Figure 1-30 shows the legend that is used to denote the component placement and
mandatory gaps for the figures that show the configurations.

Figure 1-30 Legend for the figures in this section (mandatory gap; space for optional component)

1.12.1 Minimum IBM FlashSystem 9200R Rack Solution configuration in the rack

Figure 1-31 shows the minimum IBM FlashSystem 9200R Rack Solution configuration in the
rack.

Figure 1-31 Minimum IBM FlashSystem 9200R Rack Solution configuration in the rack

Notes about the minimum configuration


򐂰 Control Enclosures (CTL) 1 and 2 are mandatory:
– Adapter slots 1 and 2 contain 32 Gb or 16 Gb FC adapters, 25 Gb Ethernet adapters,
or a mixture of both (for a mixture, insert the FC adapter into slot 1).
– Adapter slot 2 can be blank.
– Adapter slot 3 is either a SAS adapter (it is required if CTLn is attached to EXPn), a
choice of FC or 25 Gb Ethernet adapters, or blank.
򐂰 The product comes with cables that are appropriate for inter-system FC connectivity. You
must order extra cables for host and Ethernet connectivity.
򐂰 The rack has either 0 - 2 A9F Expansion Enclosures or 0 - 4 AFF Expansion
Enclosures.



򐂰 The PDUs and power cabling that are needed depend on which expansion enclosures are
ordered:
– For A9F configurations, a PDU with nine C19 outlets is required. This PDU also has
three C13 outlets on the forward-facing side.
– For other configurations, the PDU with 12 C13 outlets is selected by default.
򐂰 FC SW1 and FC SW2 are a pair of IBM SAN32C-6 or IBM SAN24B-6 Fibre Channel
switches.
򐂰 You may allocate different amounts of storage (drives) to each CTL and its expansion
components.
򐂰 A gap of 1U is maintained below the expansion area to allow for power cabling routing.

1.12.2 Maximum configuration of an IBM FlashSystem 9200R Rack Solution with Model A9F
Expansion Enclosures

Figure 1-32 shows the maximum configuration of an IBM FlashSystem 9200R Rack Solution
with Model A9F Expansion Enclosures.

Figure 1-32 Maximum configuration of an IBM FlashSystem 9200R Rack Solution with Model A9F
Expansion Enclosures

Notes about the maximum configuration with Model A9F Expansion
Enclosures
򐂰 A PDU with nine C19 rear outlets and three C13 front outlets is required.
򐂰 The product comes with cables that are appropriate for inter-system FC connectivity. You
must order extra cables for host and Ethernet connectivity.
򐂰 Any Model A9F Expansion Enclosures are installed in U2 - U6 and then U7-1.
򐂰 The CTLs and EXPs are stacked and cabled to the PDU power with the highest capacity
at the bottom, working upwards, with EXPn attached to CTLn in a bottom-up order by
using a SAS adapter on CTLn and cables.

1.12.3 Maximum configuration of an IBM FlashSystem 9200R Rack Solution with Model AFF
Expansion Enclosures

Figure 1-33 shows the maximum configuration of an IBM FlashSystem 9200R Rack Solution
with Model AFF Expansion Enclosures.

Figure 1-33 Maximum configuration IBM FlashSystem 9200R Rack Solution with Model AFF
Expansion Enclosures



Notes about the maximum configuration with Model AFF Expansion
Enclosures
򐂰 The 12x C13 PDU is selected on order.
򐂰 The product comes with cables that are appropriate for inter-system FC connectivity. You
must order extra cables for host and Ethernet connectivity.
򐂰 AG8 1 and AG8 2 are mandatory. From there, AG8 3 and AG8 4 can be added optionally
and incrementally.
򐂰 Adapter slot 1 of AG8 is dedicated to 32 Gb clustering usage.
򐂰 Adapter slot 2 of AG8 is used for your choice of a host adapter.
򐂰 Adapter slot 3 is an SAS adapter if it is required, or one of your choices if it is not.

1.12.4 FC cabling and clustering


The IBM FlashSystem 9200R Rack Solution has specialized internal cabling that is supplied
by the manufacturing plant before the machine is shipped to the customer. The plugging of
the inter-system internal FC cables is done on site at installation time.

Figure 1-34 shows the FC cabling on the rear of the IBM FlashSystem 9200R Control
Enclosure.

Figure 1-34 FC cabling on the rear of the IBM FlashSystem 9200R Control Enclosure

Notes on FC cabling and clustering
򐂰 In a control enclosure, the upper node canister is upside down, so port composition is
reversed compared to the lower node canister.
򐂰 Adapter slot 1 is dedicated to clustering.
򐂰 Adapter slot 2 is a host connectivity adapter choice, for example, 32 Gb FC or 25 Gb
Ethernet.
򐂰 Adapter slot 3 is SAS for hybrid, or it is a host connectivity adapter choice.

Note: If there are multiple adapters, install the 32 G FC adapter first, then the 16 G FC
adapter, and then the 25 G Ethernet adapter.

The “block diagram” at the top of the image depicts the rear composition of the IBM
FlashSystem 9200 system. It shows a simplified layout to draw attention to the ports for
cabling:
– The upper canister (for example, node1) is numbered right to left.
– The lower canister (for example, node2) is numbered left to right.
– Numbers 1, 2, 3, and 4 denote the inter-cluster cabling. These are the connections that
the CE, IBM SSR, and LBS personnel refer to when cabling to the cluster switches.
– H depicts host-facing ports, which are a customer responsibility and a required
selection (otherwise, the hosts cannot use the storage).
– s/h is for attaching optional SAS expansion enclosures or more SAS hosts. The ports
with the lowercase h are an optional choice.
– Where a SAS adapter is not installed, use slot 3 for optional extra host-facing ports.
The h means that they are optional.



Figure 1-35 shows up to four IBM FlashSystem 9200 Control Enclosures and the port
notation for the inter-cluster connections.

Figure 1-35 IBM FlashSystem 9200R Rack Solution inter-cluster connections

Figure 1-35 through Figure 1-39 on page 66 show the numeric cabling for clustering:
򐂰 CTL1 - CTL4 represent the relative rack position of 1 - 4 (min - max) IBM FlashSystem
9200 Control Enclosures within the rack.
򐂰 To denote the cable ports:
– N1P1 represents Node1 port 1, which is the farthest right port of the upper node
canister.
– N2P1 represents Node2 port 1, which is the lower node canister, leftmost port.

1.12.5 IBM FlashSystem 9200R Rack Solution FC configuration with 8977-T32 Cisco
switches

Figure 1-36 shows the IBM FlashSystem 9200R Rack Solution inter-system FC ports and
the connections to the 8977-T32 Cisco switch ports.

Figure 1-36 IBM FlashSystem 9200R Rack Solution FC with 8977-T32 Cisco switches

Notes about IBM FlashSystem 9200R Rack Solution FC with 8977-T32 Cisco switches
򐂰 The upper part of Figure 1-36 represents the Cisco port configuration layout.
򐂰 “1” is a cable number, which starts from CE1 node2 port1 and goes to the top Cisco switch
(SW1) port 0.
򐂰 In a minimal order of an IBM FlashSystem 9200R Rack Solution product, you order two
control enclosures (CE1 and CE2). Optionally, you may order (with the original order or as
an MES later) the third and fourth control enclosures.


1.12.6 IBM FlashSystem 9200R Rack Solution FC configuration with 8960-F24 Brocade
switches

Figure 1-37 shows the IBM FlashSystem 9200R Rack Solution inter-system FC ports and the
connections to the 8960-F24 Brocade switch ports.

Figure 1-37 IBM FlashSystem 9200R Rack Solution with Brocade switches

Notes about IBM FlashSystem 9200R Rack Solution FC configuration with 8960-F24
Brocade switches
򐂰 The upper part of Figure 1-37 represents the Brocade port configuration layout.
򐂰 “1” is a cable number, which starts from CE1 node2 port1 and goes to the top Brocade
switch (SW1) port 0.
򐂰 In a minimal order of an IBM FlashSystem 9200R Rack Solution product, you order two
control enclosures (CE1 and CE2). Optionally, you may order (with the original order or as
an MES later) the third and fourth control enclosures.

1.12.7 IBM FlashSystem 9200R Rack Solution SAS Expansion Enclosure cabling
Figure 1-38 shows the SAS port connections for both the Model AFF and Model A9F
Expansion Enclosures.

Figure 1-38 IBM FlashSystem 9200R Rack Solution Model AFF and Model A9F SAS Expansion Enclosure ports

Ports A - H refer to the connections that are made to the IBM FlashSystem 9200 Control
Enclosures.



Figure 1-39 shows the cabling matrix from the IBM FlashSystem 9200 Control Enclosures
and the A9F and AFF Expansion Enclosures. These SAS cabling connections are performed
by the IBM SSR at installation time.

Figure 1-39 IBM FlashSystem 9200 SAS cabling matrix

򐂰 For the Model A9F expansion cabling:


– Cable A, depicted as “A”, connects CE1 Node2 port1 to EXP1’s right interface, right
port.
– Cable B, depicted as “B”, connects CE1 Node1 port1 to EXP1’s left interface, right port.
򐂰 For the Model AFF expansion cabling:
– Cable C, depicted as “C”, connects CE2 Node2 port1 to EXP2’s right interface, left
port.
– Cable D, depicted as “D”, connects CE2 Node1 port1 to EXP2’s left interface, left port.

Note: The “in” SAS port for the Model AFF is the leftmost of the rear pair of enclosure
interfaces. The “in” SAS port for the Model A9F is the rightmost of the rear pair of the
enclosure interfaces.

Because you can choose whether EXP1 is the Model A9F or the Model AFF, the cable
patterns are much the same, with the diagrams on the left showing the Model A9F and the
diagrams on the right showing the Model AFF.

1.13 IBM FlashCore Module drives, NVMe SSDs, and SCM drives
This section describes the three types of drives that can be installed in the control enclosures:
򐂰 FCM drives
򐂰 NVMe SSDs
򐂰 SCM drives

Note: The SCM drives and XL FCM drives require IBM Spectrum Virtualize V8.3.1 or later
to be installed on the IBM FlashSystem Control Enclosure.

The following IBM FlashSystem products support all three types of these drives:
򐂰 IBM FlashSystem 9200 system
򐂰 IBM FlashSystem 9200R Rack Solution system
򐂰 IBM FlashSystem 7200 system
򐂰 IBM FlashSystem 9100 system
򐂰 IBM FlashSystem 5100 system

They are not supported in any of the expansion enclosures.

Note: For the feature codes for each type of drive, see the IBM FlashSystem product
announcement letter for each particular product.

Figure 1-40 shows an FCM (NVMe) with a capacity of 19.2 TB that is built by using 64-layer
Triple Level Cell (TLC) flash memory and an Everspin MRAM cache into a U.2 form factor.

Figure 1-40 IBM FlashCore Module (NVMe)

FCM drives (NVMe) are designed for high parallelism and are optimized for 3D TLC and
updated FPGAs. IBM also enhanced the FCM drives by adding a read cache to reduce
latency on highly compressed pages, and added four-plane programming to lower the overall
power during writes. FCM drives offer hardware-assisted compression up to 3:1 and are
FIPS 140-2 compliant.

FCM drives carry IBM Variable Stripe RAID (VSR) at the FCM level and use DRAID to protect
data at the system level. VSR and DRAID together optimize RAID rebuilds by offloading
rebuilds to DRAID, and they offer protection against FCM failures.



Table 1-15 shows the capacities of the FCM type drives.

Table 1-15 FCM type capacities


FCM module size Physical size (TBu)

Small 4.8 TBu

Medium 9.6 TBu

Large 19.2 TBu

XLarge 38.4 TBu

Industry-standard NVMe drives


All the IBM FlashSystem models that are described in this book provide an option to use
industry-standard NVMe drives, which are sourced from Samsung and Toshiba and available
in the following capacity variations: NVMe 800 GB, 1.92 TB, 3.84 TB, 7.68 TB, and 15.36 TB.

Table 1-16 shows the various NVMe drive size options.

Table 1-16 NVMe drive size options


Drive type Physical size (TBu)

NVMe Flash Drive 800 GB

NVMe Flash Drive 1.92 TB

NVMe Flash Drive 3.84 TB

NVMe Flash Drive 7.68 TB

NVMe Flash Drive 15.36 TB

NVMe and adapter support


NVMe is a NUMA-optimized, high-performance, and highly scalable storage protocol that is
designed to access non-volatile storage media by using a host PCIe bus. NVMe takes
advantage of low latency and internal parallelism, and reduces I/O processing overhead.
NVMe supports up to 64 K I/O queues, and each queue can support up to 64 K entries.
Earlier generations of SAS and Serial Advanced Technology Attachment (SATA) support a
single queue with only 254 and 32 entries respectively, and use many more CPU cycles to
access data. NVMe handles more workload for the same infrastructure footprint.
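
The scale of this difference becomes clear with simple arithmetic. The following Python lines
compare the maximum outstanding commands (queues multiplied by queue entries) for the
protocols that are mentioned above:

protocol_limits = {
    "NVMe": (64 * 1024) * (64 * 1024),  # up to 64 K queues, each with up to 64 K entries
    "SAS":  1 * 254,                    # single queue with 254 entries
    "SATA": 1 * 32,                     # single queue with 32 entries
}
for protocol, commands in protocol_limits.items():
    print(f"{protocol}: up to {commands:,} outstanding commands")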

NVMe over Fabric (NVMe-oF) is a technology specification that is designed to enable NVMe
message-based commands to transfer data between a host computer and a target SSD or
system. Data is transferred over a network, such as Ethernet, FC, or InfiniBand.

Storage-class memory
SCM promises a massive improvement in performance (IOPS) and better density, cost, and
energy efficiency compared to flash drive technology. IBM Research® is actively engaged in
researching these new technologies.

For more information about nanoscale devices, see Storage Class Memory at Almaden.

For a comprehensive overview of the flash drive technology, see the SNIA Educational
Library.

These technologies will fundamentally change the architecture of today’s storage
infrastructures. Figure 1-41 shows the different types of storage technologies versus the
latency for Intel drives.

Figure 1-41 Storage technologies versus the latency for Intel drives

Further major drives and technologies are being introduced this year to coincide with the
new IBM FlashSystem 9200 and IBM FlashSystem 7200 family announcements. These
drives are also supported on the IBM FlashSystem 9100 and IBM FlashSystem 5100
systems.
򐂰 Storage Class Memories: 30 DWPD with lowered latency
IBM deploys two types of SCM class drives:
– 3D XPoint Intel Optane (375 GB and 750 GB)
– Z-SSD Samsung Z-NAND flash drive (800 GB and 1600 GB)
򐂰 There are two supported RAID levels for the SCM drives:
– RAID 10 (two drives + a spare)
– DRAID 5 (four drives)

Table 1-17 shows the SCM drive options.

Table 1-17 SCM drive options


Drive type                         Physical size
NVMe Storage Class Memory Drive    375 GB
NVMe Storage Class Memory Drive    750 GB
NVMe Storage Class Memory Drive    800 GB
NVMe Storage Class Memory Drive    1.6 TB

There is a restriction on drive placement: SCM drives can be installed only in slots 20 - 24
of the control enclosure.

IBM Easy Tier will support the SCM drives with a new tier called tier_scm.

Note: SCM type drives will support only DRAID 5 and RAID 10.

1.13.1 Support for host platforms, adapters, and switches


The following host platforms, adapters, and switches are supported:
򐂰 Host platforms:
– Red Hat Enterprise Linux 7.4
– CentOS 7.4
– VMware Elastic Sky X (ESX) 6.7
򐂰 Transport protocols:
– FC 4 x 16 Gb
– FC 4 x 32 Gb
– iSCSI 8 x 10 Gb
– Ethernet 2 x 1 Gb System Management
– iSER over RoCE with 2 x 25 Gb Mellanox ConnectX4-LX
– iSER over iWARP with 2 x 25 Gb Chelsio T6 adapters
– SAS expansion 2 x 12 Gb
򐂰 Switches:
– Cisco 3232C
– Arista 7060
– Dell

1.14 Software features and licensing
Figure 1-42 shows the software offerings that are orderable with the IBM FlashSystem 9100,
9200, and 7200 systems.

Figure 1-42 IBM FlashSystem 9100, 9200, and 7200 software included for base and optional licensing

1.14.1 IBM Spectrum Virtualize for IBM FlashSystem 9100, 9200, and 7200 systems
IBM FlashSystem 9100, 9200, and 7200 all use IBM Spectrum Virtualize Software, which
combines various software-defined functions for flash storage to manage data:
򐂰 Deduplication
򐂰 Compression
򐂰 Thin provisioning
򐂰 Easy Tier (automatic and dynamic tiering)
򐂰 Encryption for internal and virtualized external storage
򐂰 SCSI UNMAP
򐂰 HyperSwap (HA active-active)
򐂰 FlashCopy (snapshot)
򐂰 Remote data replication

For more information about IBM FlashSystem 9100 functional capabilities and software, see
Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize
V8.2.1, SG24-7933.



1.14.2 IBM Multi-Cloud starter software for IBM FlashSystem 9100, 9200, and 7200 systems
IBM FlashSystem 9100 Multi-Cloud starter software with IBM Spectrum Virtualize includes
the following software stacks:
򐂰 IBM Spectrum Protect Plus
򐂰 IBM Spectrum Copy Data Management
򐂰 IBM Spectrum Virtualize for Public Cloud

The software bundle is included with IBM FlashSystem 9100, 9200, and 7200 Control
Enclosures. You use it to develop a multi-cloud strategy to harness the power of data. It
increases the flexibility to manage data through choice, security, and protection. Licensing
includes 5 TB of managed capacity, and it provides a base to migrate to a complete IBM
FlashSystem 9100 Multi-Cloud solution by following an IBM validated blueprint.

1.14.3 IBM FlashSystem 9100 Multi-Cloud solutions


IBM FlashSystem 9100 Multi-Cloud solutions are a set of solutions that are designed to
support today’s data-driven, multi-cloud architectures. They are NVMe-ready, cloud-enabled
software solutions that are validated according to a set of multi-cloud blueprints. These
solutions enable modernization of infrastructure by expanding IBM FlashSystem 9100, IBM
FlashSystem 9200, and IBM FlashSystem 7200 systems to be used for multi-cloud
architectures that include data protection, business continuity, and data reuse.

IBM FlashSystem 9100 Multi-Cloud Solution for Data Reuse, Protection,


and Efficiency
This solution is composed of IBM Spectrum Protect Plus and IBM Spectrum Copy Data
Management, and it is designed to secure and reuse your company’s most precious assets,
protect your data, and drive secondary data efficiencies within multi-cloud environments.

IBM FlashSystem 9100 Multi-Cloud Solution for Business Continuity and


Data Reuse
This solution is composed of IBM Spectrum Virtualize for Public Cloud and IBM Spectrum
Copy Data Management. By using IBM Spectrum Virtualize for Public Cloud, data can be
copied by using synchronous or asynchronous real-time replication from an IBM FlashSystem
9100 system to IBM Cloud. IBM Spectrum Copy Data Management can be used to create
secondary data reuse snapshots of data that is managed by IBM Spectrum Virtualize for
Public Cloud in IBM Cloud.

IBM FlashSystem 9100 Solution for Private Cloud Flexibility and Data
Protection
This solution is composed of IBM Spectrum Copy Data Management, and it is designed to
simplify and transform multi-cloud environments by combining private cloud management
with enabling tools, and it is managed by using a single user interface.

1.14.4 IBM FlashSystem 9200 and 7200 Multi-Cloud solutions
Along with IBM Spectrum Virtualize on-premises, IBM Spectrum Virtualize for Public Cloud
V8.3.1 can enable clients to migrate data to and from supported public cloud providers,
including IBM Cloud and Amazon Web Services (AWS). Clients can create hybrid multicloud
solutions for their traditional block data and workloads by using built-in IP replication
capabilities.

IBM Spectrum Virtualize for Public Cloud is deployed on cloud infrastructure as a service
(IaaS) from IBM Cloud or AWS, either on bare metal servers in IBM Cloud or on Amazon
Elastic Compute Cloud (Amazon EC2) instances on AWS, and virtualizing AWS Amazon
Elastic Block Store (EBS) storage. Clients create clustered configurations like on-premises
while bringing the optimization and virtualization capabilities of IBM Spectrum Virtualize to
public cloud infrastructures. These capabilities include most of the key features of IBM
Spectrum Virtualize, such as FlashCopy, Transparent Cloud Tiering (TCT), thin provisioning,
Global Mirror (GM), Metro Mirror (MM), Global Mirror with Change Volumes (GMCV), and
Easy Tier.

IBM Spectrum Virtualize on-premises and IBM Spectrum Virtualize for Public Cloud together
enable a hybrid multicloud deployment with a single data management layer between
on-premises and the cloud across heterogeneous storage pools that might exist in the data
center. This solution has the following features:
򐂰 Storage pooling and automated allocation with thin provisioning
򐂰 Easy Tier automated tiering
򐂰 Deduplication and compression to reduce cloud storage costs
򐂰 FlashCopy and remote mirror for local snapshots and remote replication
򐂰 Support for virtualized and containerized server environments:
– VMware
– Microsoft Hyper-V
– IBM PowerVM®
– Red Hat OpenShift
– CRI-O
– Kubernetes

For more information about IBM FlashSystem and Hybrid Multi-Cloud, see Embracing Hybrid
Multicloud Storage Edition.

IBM Storage Insights


Cloud-based IBM Storage Insights provides a single dashboard that provides a clear view of
all IBM block storage. It enables predictive analysis and displays real-time and historical
charts to monitor performance and capacity. With storage health information, you can focus
on areas that need attention.

When IBM Support is needed, IBM Storage Insights simplifies uploading logs, speeds
resolution with online configuration data, and provides an overview of open tickets all in one
place. IBM Storage Insights Pro is a subscription service that provides longer historical views
of data, more reporting and optimization options, and supports IBM file and block storage
together with EMC VNX and VMAX.

For more information about the architecture and a design overview of IBM Storage Insights,
see IBM FlashSystem 9100 Architecture, Performance, and Implementation, SG24-8425.



IBM Storage Insights: Information and registration
For more information about IBM Storage Insights, see the following websites. The last link can
be used to sign up and register for this no-additional-charge service.
򐂰 Fact Sheet
򐂰 Demonstration
򐂰 IBM Knowledge Center: IBM Storage Insights
򐂰 IBM Storage Insights registration

1.14.5 IBM FlashSystem 9100, 9200, and 7200 high availability


IBM FlashSystem 9100, 9200, and 7200 systems offer high system and data availability with
the following features:
򐂰 HyperSwap support
򐂰 Dual-active intelligent node canisters with mirrored cache
򐂰 Dual-port flash drives with automatic drive failure detection and RAID rebuild
򐂰 Redundant hardware, including power supplies and fans
򐂰 Hot-swappable and customer replaceable components
򐂰 Automated path failover support for the data path between the server and the drives

1.15 Data protection on IBM FlashSystem 9100, 9200, and 7200 systems

Data protection from NAND chip and controller failures is managed by using two IBM
technologies: VSR and DRAID. VSR protects against failures at the FCM drive chip level, and
DRAID protects data from a failure of FCM drives and industry-standard NVMe drives.

1.15.1 Variable Stripe RAID


VSR is a patented IBM technology that provides data protection at the page, block, or chip
level. It eliminates the necessity to replace a whole flash module when a single chip or plane
fails, which expands the life and endurance of flash modules and reduces maintenance
events throughout the life of the system.

For more information about VSR, see Introducing and Implementing IBM FlashSystem
V9000, SG24-8273.

1.15.2 DRAID
DRAID functions are managed by IBM Spectrum Virtualize, which enables a storage array to
distribute RAID 5 or RAID 6 across a larger set of drives. For example, on a traditional RAID 5
with eight drives, the data was striped across seven drives and the parity was on the
eighth drive.

DRAID enhances this method by specifying the stripe width and the number of drives
separately. As a result, the setup still has seven data stripes that are protected by a parity
stripe, but the eight drives are selected from the larger set. In addition, with distributed
sparing, each drive in the array gives up some of its capacity to make a spare instead of an
unused spare drive.

The benefit of DRAID is improved rebuild performance. During a drive failure, the data rebuild
is done from a larger set of drives, which increases the number of parallel reads. The data is
rebuilt to a larger set of distributed sparing drives, which also increases the number of parallel
writes, as compared to traditional RAID, where reads are done from a smaller set of drives
and writes go to a single spare drive.
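
To illustrate the effect of distributed sparing on capacity, the following simplified Python
sketch estimates the usable capacity of a DRAID 6 array. The formula is an approximation for
illustration only; the real system accounts for metadata, strip sizes, and rounding:

def draid6_usable_tb(drive_count, drive_tb, stripe_width, rebuild_areas=1):
    """Approximate usable capacity: spare space is spread across all member drives."""
    spare_tb = rebuild_areas * drive_tb                  # capacity reserved as distributed spare
    data_tb = drive_count * drive_tb - spare_tb          # capacity left for data plus parity
    return data_tb * (stripe_width - 2) / stripe_width   # DRAID 6: two parity strips per stripe

# Sixteen 9.6 TB drives, stripe width 12, one rebuild area: about 120 TB usable.
print(draid6_usable_tb(drive_count=16, drive_tb=9.6, stripe_width=12))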

DRAID 6
DRAID 6 is advised for IBM FlashSystem 9100, 9200, 7200, and 5100 systems, and it is the
only allowed option from the GUI. DRAID 5 is configurable by using the CLI. DRAID 6 creates
spare space across all NVMe SSDs or FCM drives in the array and, during a failure, the array
rebuilds data by using the spare space faster than a traditional RAID rebuild.

DRAID rebuild
The spare area for rebuild is reserved against the physical capacity of the drives. As the
rebuild progresses, the data is copied to the remaining drives, which increases the capacity
threshold, as shown in Figure 1-43.

Figure 1-43 Physical capacity threshold

DRAID copyback
DRAID copyback is a similar process to the rebuild process. Upon completion, IBM
FlashSystem 9100, 9200, 7200, and 5100 systems release the spare area by unmapping the
area that was used, as shown in Figure 1-44.

Figure 1-44 DRAID copyback




Chapter 2. Planning
This chapter describes the steps that are required to plan the installation and configuration of
IBM FlashSystem 9100, IBM FlashSystem 9200, IBM FlashSystem 7200, and IBM
FlashSystem 5100 systems in your storage network.

This chapter is not intended to provide in-depth information about the described topics; it
provides only general guidelines. For an enhanced analysis, see IBM FlashSystem 9200 and
9100 Best Practices and Performance Guidelines, SG24-8448 and IBM FlashSystem 9100
Architecture, Performance, and Implementation, SG24-8425.

Note: Make sure that the planned configuration is reviewed by IBM or an IBM Business
Partner before implementation. Such a review can both increase the quality of the final
solution and prevent configuration errors that might impact the solution delivery.

This chapter includes the following topics:


򐂰 2.1, “General planning rules” on page 78
򐂰 2.2, “Planning for availability” on page 79
򐂰 2.3, “Physical installation planning” on page 79
򐂰 2.4, “Planning for system management” on page 80
򐂰 2.5, “Connectivity planning” on page 81
򐂰 2.6, “Fibre Channel SAN configuration planning” on page 81
򐂰 2.7, “IP SAN configuration planning” on page 87
򐂰 2.8, “Back-end storage configuration” on page 90
򐂰 2.9, “Internal storage configuration” on page 92
򐂰 2.10, “Storage pool configuration” on page 92
򐂰 2.11, “Volume configuration” on page 94
򐂰 2.12, “Host attachment planning” on page 96
򐂰 2.13, “Planning copy services” on page 98
򐂰 2.14, “Data migration” on page 100
򐂰 2.15, “Performance monitoring with IBM Storage Insights” on page 101
򐂰 2.16, “Configuration backup procedure” on page 103



2.1 General planning rules
To maximize the benefit from a system, installation planning must include several important
steps. These steps ensure that the system provides the best possible performance, reliability,
and ease of management for your application needs.

The general rule of planning is to define your goals and then plan a solution that enables you
to reach those goals.

Consider the following points when planning a system:


򐂰 Collect and document information about application servers (hosts) that you want to attach
to the system and their data:
– Amount of data in use for each host and growth plans.
– Data profile: Compressibility and deduplicability.
– Host traffic profile: Percentage of reads and writes, percentage of sequential/random
access patterns, and data block size.
– Host performance requirements: Input/output operations per second (IOPS) and
bandwidth.
򐂰 Perform capacity and performance sizing of a system:
– If any external back-end systems are going to be virtualized, assess their capacity and
performance capabilities.
– Calculate the number of drives or IBM FlashCore Module (FCM) drives that are needed
to satisfy your capacity requirements by considering your data compression ratios and
accounting for future growth.
– Verify that the capacity assessment results satisfy your performance requirements.

Note: Contact your IBM sales representative or IBM Business Partner to perform
these calculations.

򐂰 Assess your recovery point objective (RPO) and recovery time objective (RTO)
requirements, and plan for high availability (HA) and Remote Copy (RC) functions. Decide
whether you require a dual-site deployment with the IBM HyperSwap feature, and decide
whether you must implement RC and determine its type (synchronous or asynchronous).
Review the extra configuration requirements that are imposed.
򐂰 Define the number of input/output (I/O) groups (control enclosures) and expansion
enclosures. The number of necessary enclosures depends on the solution type, overall
performance, and capacity requirements.
򐂰 Plan for host attachment interfaces, protocols, and storage area network (SAN). Consider
the number of ports, bandwidth requirements, and HA.
򐂰 Perform configuration planning by defining the number of internal storage arrays and
external storage arrays that will be virtualized. Define the number and type of pools, the
number of volumes, and the capacity of each of the volumes.
򐂰 Define a naming convention for the system nodes, volumes, and other storage objects.
򐂰 Plan a management Internet Protocol (IP) network and management users’ authentication
system.
򐂰 Plan for the physical location of the equipment in the rack.
򐂰 Verify that your planned environment is a supported configuration.

Note: Use IBM System Storage Interoperation Center (SSIC) to check compatibility.

򐂰 Verify that your planned environment does not exceed system configuration limits.

Note: For more information about your platform and code version, see Configuration
Limits and Restrictions.

򐂰 Review the planning aspects that are described in the following sections of this chapter.

2.2 Planning for availability


When planning the deployment of the IBM FlashSystem family solution, avoid creating single
points of failure (SPOFs). Plan your system availability according to the requirements of your
solution. Depending on your availability needs, consider the following aspects:
򐂰 Single-site or multi-site configuration
Multi-site configurations increase solution resiliency, and can be the basis of disaster
recovery (DR) solutions. Systems can be configured as a multi-site solution with sites
working in active-active mode. Both synchronous and asynchronous data replication is
supported by multiple inter-site link options.
򐂰 Physical separation of system building blocks
A dual-rack deployment might increase the availability of your system if your back-end
storage, SAN, and local area network (LAN) infrastructure also do not use a single-rack
placement scheme. You can further increase system availability by ensuring that
enclosures are powered from different power circuits and in different fire protection zones.
򐂰 Quorum disk placement
For a deployment with multiple I/O groups, plan for a quorum device on an external
back-end system or an IP quorum application. IP quorum applications must be deployed
on hosts that do not depend on storage that is provisioned by a system. Multiple IP
quorum application deployment is recommended.

2.3 Physical installation planning


You must consider several key factors when you plan the physical site of a system. The
physical site must have the following characteristics:
򐂰 Sufficient rack space must exist to install controller and disk enclosures.
򐂰 The site must meet the power, cooling, and environmental requirements.

For more information about power and environmental requirements, see IBM Knowledge
Center and expand Planning → Planning for hardware → Physical installation planning,
and then select Control enclosure requirements and SAS expansion enclosure
requirements.

Your system order includes a printed copy of the Quick Installation Guide, which also provides
information about environmental and power requirements.

Create a cable connection table that follows your environment’s documentation procedure to
track the following connections that are required for the setup:
򐂰 Power
򐂰 Serial-attached SCSI (SAS)
򐂰 Ethernet
򐂰 Fibre Channel (FC)

When planning for power, plan for a separate independent power source for each of the two
redundant power supplies of a system enclosure.

Distribute your expansion enclosures between control enclosures and SAS chains. For more
information, see 13.1.4, “Enclosure SAS cabling” on page 777 and IBM Knowledge Center
and expand Installing → Installing hardware → Installing the system hardware →
Connecting the components → Connecting optional expansion enclosures to the
control enclosure.

When planning SAN cabling, make sure that your physical topology adheres to zoning rules
and recommendations.

The physical installation and initial setup of IBM FlashSystem 9100 and 9200 is performed by
an IBM System Services Representative (IBM SSR).

2.4 Planning for system management


Each system’s node has a technician port. It is a dedicated 1 gigabits per second (Gbps)
Ethernet port. The initialization of a system and its basic configuration is performed by using
this port. After the initialization is complete, the technician port must remain disconnected
from a network and used only to service the system.

For management, each system node requires at least one Ethernet connection. The cable
must be connected to port 1, which is a 10 Gbps Ethernet port (it does not negotiate speeds
below 1 Gbps). For increased availability, an optional management connection may be
configured over Ethernet port 2.

For configuration and management, you must allocate an IP address to each node canister,
which is referred to as the service IP address. Both Internet Protocol Version 4 (IPv4) and
Internet Protocol Version 6 (IPv6) are supported.

In addition to a service IP address on each node, each system has a cluster management IP
address. The management IP address cannot be the same as any of the defined service IP
addresses. The management IP can automatically fail over to another address if there are
maintenance actions or a node failure.

For example, a system that consists of two control enclosures requires a minimum of five
unique IP addresses: one for each node and one for the system as a whole.
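
The following sketch shows how these addresses are assigned from the CLI. All addresses
are placeholders, and the service IP of each node is set separately (for a new system, this
step is normally performed through the technician port during initialization):

# Set the service IP address of the local node canister (service CLI)
satask chserviceip -serviceip 192.0.2.11 -gw 192.0.2.1 -mask 255.255.255.0
# Set the cluster management IP address on Ethernet port 1 (cluster CLI)
chsystemip -clusterip 192.0.2.10 -gw 192.0.2.1 -mask 255.255.255.0 -port 1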

Ethernet ports 1 and 2 are not reserved only for management. They may also be used for
internet Small Computer Systems Interface (iSCSI) or IP replication traffic if they are
configured to do so. However, management and service IP addresses cannot be used for
host or back-end storage communication.

System management is performed by using an embedded GUI that is running on the nodes;
the command-line interface (CLI) is also available. To access the management GUI, point a
web browser to the system management IP address. To access the management CLI, point a
Secure Shell (SSH) client to a management IP and use the default SSH protocol port
(22/TCP).

By connecting to a service IP address with a web browser, you can access the Service
Assistant interface, which may be used for maintenance and service tasks. Connecting to a
service IP address with an SSH client provides access to the service CLI.

When you plan your management network, note that the IP Quorum applications and
Transparent Cloud Tiering (TCT) are communicating with a system through the management
ports. For more information about cloud backup requirements, see 10.3, “Transparent Cloud
Tiering” on page 576.

2.5 Connectivity planning


An IBM FlashSystem system offers a wide range of connectivity options to back-end storage
and hosts, such as FC technologies (“traditional” SCSI FC and Non-Volatile Memory Express
(NVMe) over Fibre Channel (FC-NVMe) (also known as NVMe over Fabric (NVMe-oF))) and
IP network technologies (iSCSI and iSCSI Extensions for Remote Direct Memory Access
(RDMA) (iSER)). The connection options depend on the hardware configuration.

Table 2-1 lists the communication types that can be used for communicating between system
nodes, hosts, and back-end storage systems. All types can be used concurrently.

Table 2-1 Communication options

Communication   System to   System to      Node to node      System to
type            host        back-end       (intra-cluster)   system
                            storage                          (replication)

SCSI FC         Yes         Yes            Yes               Yes

FC-NVMe         Yes         No             No                No

iSCSI           Yes         Yes            No                No (a)

iSER            Yes         No             Yes               No (a)

a. Replication traffic can be sent over an IP network with native IP replication, which can be
configured on both onboard 10-Gigabit Ethernet (GbE) ports and optional 25 GbE ports.

2.6 Fibre Channel SAN configuration planning


Each node canister may be equipped with one, two, or three 4-port 16 Gbps or 32 Gbps FC
adapters (the maximum number of adapters depends on the system hardware type) that are
used for SCSI FC and FC-NVMe attachment.

2.6.1 Physical topology


The switch configuration for a fabric must comply with the switch manufacturer’s configuration
rules, which can impose restrictions. For example, a switch manufacturer might limit the
number of supported switches or ports in a SAN fabric. Operating outside of the switch
manufacturer’s rules is not supported.

In an environment where you have a fabric with mixed port speeds (8 Gb, 16 Gb, and 32 Gb),
the best practice is to connect the system to the switch operating at the highest speed.

The connections between the system’s enclosures (node-to-node traffic) and between a
system and the virtualized back-end storage require the best available bandwidth. For optimal
performance and reliability, ensure that paths between the system nodes and storage
systems do not cross inter-switch links (ISLs). If you use ISLs on these paths, make sure that
sufficient bandwidth is available. SAN monitoring is required to identify faulty ISLs.

No more than three ISL hops are permitted among nodes that are in the same system but in
different I/O groups. If your configuration requires more than three ISL hops for nodes that are
in the same system but in different I/O groups, contact your IBM Support Center.

Direct connection of the system FC ports to host systems or between nodes in the system
without using an FC switch is supported. For more information, see IBM Knowledge Center
and expand Planning → Planning your network and storage network → Planning for a
direct-attached configuration.

For the topology requirements for HyperSwap configurations, see IBM Storwize V7000,
Spectrum Virtualize, HyperSwap, and VMware Implementation, SG24-8317.

2.6.2 Zoning
A SAN fabric must have four distinct zone classes:
Inter-node zones For communication between nodes in the same system
Storage zones For communication between the system and back-end storage
Host zones For communication between the system and hosts
Inter-system zones For remote replication

Figure 2-1 shows the system zoning classes.

Figure 2-1 System zoning

The fundamental rules of system zoning are described in the rest of this section. However,
you should review the latest zoning guidelines and requirements when designing zoning for
the planned solution by going to IBM Knowledge Center and expanding Configuring →
Configuration details → SAN configuration and zoning rules summary.

2.6.3 N_Port ID Virtualization


N_Port ID Virtualization (NPIV) is a method for virtualizing a physical FC port that is used for
host I/O. By default, all new systems work in NPIV mode (the Target Port Mode attribute is set
to Enabled).

NPIV mode creates a virtual worldwide port name (WWPN) for every system physical FC
port. This WWPN is available only for host connection. During node maintenance, restart, or
failure, the virtual WWPN from that node is transferred to the same port of the other node in
the I/O group.

For more information about NPIV mode and how it works, see Chapter 7, “Hosts” on
page 369.

Ensure that the FC switches give each physically connected system port the ability to create
four more NPIV ports.

When performing zoning configuration, virtual WWPNs are used only for host communication,
that is, “system to host” zones must include virtual WWPNs, and internode, intersystem, and
back-end storage zones must use the WWPNs of physical ports. Ensure that equivalent ports
(with the same port ID) are on the same fabric and in the same zone.

For more information about other host zoning requirements, see IBM Knowledge Center and
expand Configuring → Configuration details → Zoning details → Zoning requirements
for N_Port ID Virtualization.

2.6.4 Inter-node zone


The purpose of intracluster or inter-node zones is to enable traffic between all node canisters
within the clustered system. This traffic consists of heartbeats, cache synchronization, and
other data that nodes must exchange to maintain a healthy cluster state.

Traffic between nodes in one control enclosure is sent over a PCIe connection over an
enclosure backplane. However, for redundancy, you must configure an inter-node SAN zone
even if you have a single I/O group system. For a system with multiple I/O groups, all traffic
between control enclosures must pass through a SAN.

A system node cannot have more than 16 fabric paths to another node in the same system.

2.6.5 Back-end storage zones


Create a separate zone for each back-end storage subsystem that is virtualized. Switch
zones that contain back-end storage system ports must not have more than 40 ports. A
configuration that exceeds 40 ports is not supported.

All nodes in a system must connect to the same set of back-end storage system ports on
each device.

If the edge devices contain more stringent zoning requirements, follow the storage system
rules to further restrict the system zoning rules.

For more information connecting back-end storage systems, see IBM Knowledge Center and
expand Configuring → Configuration details → External storage system configuration
details (Fibre Channel) and Configuring → Configuring and servicing storage
systems → External storage system configuration with Fibre Channel connections.

2.6.6 Host zones


A host must be zoned to an I/O group to access volumes that are presented by this I/O group.

The preferred zoning policy is single initiator zoning. To implement it, create a separate zone
for each host bus adapter (HBA) port, and place exactly one port from each node in each I/O
group that the host accesses in this zone. For deployments with more than 64 hosts that are
defined in the system, this host zoning scheme is mandatory.

For smaller installations, you may have up to 40 FC ports (including both host HBA ports and
the system’s virtual WWPNs) in a host zone if the zone contains similar HBAs and operating
systems (OSs). A valid zone can be 32 host ports plus eight system ports.

FC-NVMe applies more limits to the host zone configuration:


򐂰 Zone up to four host ports to detect up to four ports on a node, and zone the same or more
host ports to detect an extra four ports on the second node of the I/O group.
򐂰 Zone a total maximum of 16 hosts to detect a single I/O group.

Consider the following rules for zoning hosts over either SCSI or FC-NVMe:
򐂰 For any volume, the number of paths through the SAN from the host to a system must not
exceed eight. For most configurations, four paths to an I/O group are sufficient.
In addition to zoning, you can use a port mask to control the number of host paths. For more
information, see Chapter 7, “Hosts” on page 369.
򐂰 Balance the host load across the system’s ports. For example, zone the first host with
ports 1 and 3 of each node in I/O group, zone the second host with ports 2 and 4, and so
on. To obtain the best overall performance of the system, the load of each port should be
equal. Assuming that a similar load is generated by each host, you can achieve this
balance by zoning approximately the same number of host ports to each port.
򐂰 Spread the load across all system ports. Use all ports that are available on your machine.
򐂰 Balance the host load across HBA ports. If the host has more than one HBA port per
fabric, zone each host port with a separate group of system ports.

All paths must be managed by the multipath driver on the host side. Make sure that the
multipath driver on each server can handle the number of paths that is required to access all
volumes that are mapped to the host.

2.6.7 Zoning considerations for Metro Mirror and Global Mirror
The SAN configurations that use inter-cluster Metro Mirror (MM) and Global Mirror (GM)
relationships have the following extra switch zoning requirements:
򐂰 If two ISLs are connecting the sites, split the ports from each node between the ISLs, that
is, exactly one port from each node must be zoned across each ISL.
򐂰 Local clustered system zoning continues to follow the standard requirement for all ports on
all nodes in a clustered system to be zoned to one another.
򐂰 Review the latest requirements and recommendations at IBM Knowledge Center by
selecting Configuring → Configuration details → Zoning details → Zoning
constraints for Metro Mirror and Global Mirror.

When designing zoning for a geographically dispersed solution, consider the effect of the
cross-site links on the performance of the local system.

Using mixed port speeds for intercluster communication can lead to port congestion, which
can negatively affect the performance and resiliency of the SAN. Therefore, it is not
supported.

Note: If you limit the number of ports that are used for remote replication to two ports on
each node, you can limit the effect of a severe and abrupt overload of the intercluster link
on system operations.

If all node ports (N_Ports) are zoned for intercluster communication and the intercluster
link becomes severely and abruptly overloaded, the local FC fabric can become congested
so that no FC ports on the local system can perform local intracluster communication,
which can result in cluster consistency disruption.

For more information about how to avoid such situations, see 2.6.8, “Port designation
recommendations” on page 85.

For more information about zoning best practices, see IBM FlashSystem 9200 and 9100 Best
Practices and Performance Guidelines, SG24-8448.

2.6.8 Port designation recommendations


If you have enough available FC ports on the system, designate different types of traffic to
different ports. This configuration provides a level of protection against malfunctioning devices
and workload spikes that might otherwise impact the system.

Intra-cluster communication must be protected because it is used for heartbeat and metadata
exchange between all nodes of all I/O groups of the cluster.

In solutions with multiple I/O groups, upgrade nodes beyond the standard four FC port
configuration. This upgrade provides an opportunity to dedicate ports to local node traffic,
which separates them from other cluster traffic on the remaining ports.

Isolating remote replication traffic to dedicated ports is beneficial because it ensures that any
problems that affect the cluster-to-cluster interconnect do not affect all ports on the local
cluster.

To isolate both node-to-node and system-to-system traffic, use the port designations that are
shown in Figure 2-2.

Figure 2-2 Port masking configuration

When planning masking, consider the following examples:


򐂰 A system with a single control enclosure (I/O group) and without replication: No port
dedication and masking are required. Inter-node traffic is sent over a backplane.
򐂰 A HyperSwap system with two control enclosures: Dedicate ports for inter-node traffic and
apply an FC mask.
򐂰 A standard topology system with four I/O groups: The masking setup depends on the
storage configuration, so more planning is required.

To achieve traffic isolation, use a combination of SAN zoning and local and partner port
masking. For more information about how to set port masks, see Chapter 3, “Initial
configuration” on page 105.
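
As a sketch (the mask values are examples only and must match your own port count and
designations), FC port masks are applied system-wide with the chsystem command. Each
binary digit of a mask corresponds to a port ID, with the rightmost digit representing port 1:

# Restrict node-to-node traffic to FC ports 5 and 6
chsystem -localfcportmask 110000
# Restrict replication traffic to FC ports 7 and 8
chsystem -partnerfcportmask 11000000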

Alternative port mappings that spread traffic across HBAs might allow adapters to come back
online after a failure. However, they do not prevent a node from going offline temporarily to
restart and attempt to isolate the failed adapter and then rejoin the cluster. Also, the mean
time between failures (MTBF) of the adapter is not significantly shorter than that of the
non-redundant node components. The approach that is presented here accounts for all these
considerations with the idea that increased complexity can lead to migration challenges in the
future, so a simpler approach is better.

2.7 IP SAN configuration planning
Each system node is equipped with four onboard 10 Gbps Ethernet network interface ports.
They can operate with link speeds of 1 Gbps and 10 Gbps. Any of them can be used for host
I/O with the iSCSI protocol, external storage virtualization with iSCSI, and for native IP
replication. Also, ports 1 and 2 may be used for managing the system.

Each node may also be configured with one, two, or three 2-port 25 Gbps RDMA-capable
Ethernet adapters. The maximum number of adapters depends on the system hardware type.
Adapters can auto-negotiate link speeds 1 - 25 Gbps. All their ports may be used for host I/O
with iSCSI or iSER, external storage virtualization with iSCSI, node-to-node traffic, and for IP
replication.

You can set virtual local area network (VLAN) settings to separate network traffic for Ethernet
transport. The system supports VLAN configurations for the system, host attachment, storage
virtualization, and IP replication traffic. VLANs can be used with priority flow control (PFC)
(IEEE 802.1Qbb).

All ports may be configured with an IPv4 address, an IPv6 address, or both. Each application
of a port needs a separate IP. For example, port 1 of every node can be used for
management, iSCSI, and IP replication, but three unique IP addresses are required.

If node Ethernet ports are connected to different isolated networks, then a different subnet
must be used for each network.
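
The following sketch shows how an IP address and an optional VLAN tag might be assigned
to a port for iSCSI host I/O (the node name, addresses, and VLAN ID are placeholders):

# Assign an IPv4 address with VLAN 100 to Ethernet port 3 of node1
cfgportip -node node1 -ip 192.0.2.21 -mask 255.255.255.0 -gw 192.0.2.1 -vlan 100 3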

2.7.1 iSCSI and iSER protocols


The iSCSI protocol is a block-level access protocol that encapsulates SCSI commands into
TCP/IP packets. Therefore, iSCSI uses an IP network rather than requiring the FC
infrastructure.

The iSER is a network protocol that extends iSCSI to use RDMA. RDMA is provided by either
the internet Wide Area RDMA Protocol (iWARP) or RDMA over Converged Ethernet (RoCE).
It permits data to be transferred directly into and out of SCSI buffers, providing faster
connection and processing time than traditional iSCSI connections.

iSER requires optional 25 Gbps RDMA-capable Ethernet cards. RDMA links work only
between RoCE ports or between iWARP ports: from a RoCE node canister port to a RoCE
port on a host, or from an iWARP node canister port to an iWARP port on a host. So, there
are two types of 25 Gbps adapters that are available for a system, and they cannot be
interchanged without a similar RDMA type change on the host side.

Either iSCSI or iSER works for standard iSCSI communications, that is, ones that do not use
RDMA.

The 25 Gbps adapters come with SFP28 fitted, which can be used to connect to switches that
use OM3 optical cables.

For more information about the Ethernet switches and adapters that are supported by iSER
adapters, see SSIC.

Note: At the time of writing, connecting a 10 Gb switch to a 25 Gb interface is supported
only through a SCORE request. For more information, contact your IBM representative.

2.7.2 Priority flow control
PFC is an Ethernet protocol that you can use to select the priority of different types of traffic
within the network. With PFC, administrators can reduce network congestion by slowing or
pausing certain classes of traffic on ports, thus providing better bandwidth for more important
traffic. The system supports PFC on various supported Ethernet-based protocols on three
types of traffic classes: system (node-to-node), host attachment, and back-end storage traffic.

You can configure a priority tag for each of these traffic classes. The priority tag can be any
value 0 - 7. You can set identical or different priority tag values to all these traffic classes. You
can also set bandwidth limits to ensure quality of service (QoS) for these traffic classes by
using the Enhanced Transmission Selection (ETS) setting on the network.

To use PFC and ETS, ensure that the following tasks are completed:
򐂰 Configure a VLAN on the system to use PFC capabilities for the configured IP version.
򐂰 Ensure that the same VLAN settings are configured on all entities, including all
switches between the communicating end points.
򐂰 On the switch, enable Data Center Bridging Exchange (DCBx). DCBx enables switch and
adapter ports to exchange parameters that describe traffic classes and PFC capabilities.
For these steps, check your switch documentation for details.
򐂰 For each supported traffic class, configure the same priority tag on the switch. For
example, if you plan to have a priority tag setting of 3 for storage traffic, ensure that the
priority is also set to 3 on the switch for that traffic type.
򐂰 If you are planning on using the same port for different types of traffic, ensure that ETS
settings are configured on the network.

For more information, see IBM Knowledge Center and expand Configuring → Configuring
priority flow control.

2.7.3 RDMA clustering


An IBM FlashSystem system may use 25 Gbps cards for node-to-node traffic. A dual-site
HyperSwap configuration can also use the cards for an inter-site link.

A minimum of two dedicated RDMA-capable ports are required for node-to-node RDMA
communications to ensure best performance and reliability. These ports must be configured
for inter-node traffic only and cannot be used for host attachment, virtualization of
Ethernet-attached external storage, or IP replication traffic.

The following limitations apply to a configuration of ports that are used for RDMA-clustering:
򐂰 Only IPv4 addresses are supported.
򐂰 Only the default value of 1500 is supported for the maximum transmission unit (MTU).
򐂰 Port masking is not supported on RDMA-capable Ethernet ports. Due to this limitation, do
not exceed the maximum of four ports for node-to-node communications.
򐂰 Node-to-node communications that use RDMA-capable Ethernet ports are not supported
in a network configuration that contains more than two hops in the fabric of switches.

For more information, see IBM Knowledge Center and expand Configuring →
Configuration details → Configuration details for using RDMA-capable Ethernet ports
for node-to-node communications.

Note: Before you configure a system that uses RDMA-capable Ethernet ports for
node-to-node communications in a standard or HyperSwap topology system, contact your
IBM representative.

2.7.4 iSCSI back-end storage attachment


An IBM FlashSystem system supports the virtualization of external storage systems that are
attached through iSCSI. Onboard 10 Gbps Ethernet ports or optional 25 Gbps Ethernet ports
may be used. The 25 GbE network interface controllers (NICs) work in plain iSCSI mode
without using any RDMA capabilities.

Consider the following items when planning for iSCSI virtualization:


򐂰 Direct attachment between the system and external storage systems is not supported,
and requires Ethernet switches between the system and the external storage.
򐂰 To avoid a SPOF, a dual-switch configuration is recommended. For full redundancy, a
minimum of two paths between each initiator node and target node must be configured
with each path going through a separate switch.
򐂰 Extra paths can be configured to increase throughput if both initiator and target nodes
support more ports.
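
As an illustrative sketch of the workflow (the exact commands and parameters can vary by
code level, so verify them against your documentation), iSCSI target ports on the external
controller are first discovered and then added as storage ports:

# Discover iSCSI target ports of the back-end controller from source port 3
detectiscsistorageportcandidate -srcportid 3 -targetip 192.0.2.50
# List the discovered candidates, and then add candidate 0 as a storage port
lsiscsistorageportcandidate
addiscsistorageport 0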

All planning and implementation aspects of external storage virtualization with iSCSI are
described in detail in iSCSI Implementation and Best Practices on IBM Storwize Storage
Systems, SG24-8327.

2.7.5 IP network host attachment


You can attach the system to iSCSI or iSER hosts by using the Ethernet ports of the systems.

For each Ethernet port on a node, a maximum of one IPv4 address and one IPv6 address can
be designated for iSCSI or iSER I/O. You can configure the internet Storage Name Service
(iSNS) to facilitate a scalable configuration and management of iSCSI storage devices.

The same ports can be used for iSCSI and iSER host attachment concurrently. However, a
single host can establish either an iSCSI or an iSER session, but not both.

iSCSI or iSER hosts connect to the system through the node-port IP addresses, which are
assigned to the Ethernet ports of the node. If the node fails, the address becomes unavailable
and the host loses communication with the system through that node. To allow hosts to
maintain access to data, the node-port IP addresses for the failed node are transferred to the
partner node in the I/O group. The partner node handles requests for both its own node-port
IP addresses and also for node-port IP addresses on the failed node. This process is known
as node-port IP failover. In addition to node-port IP addresses, the iSCSI name and iSCSI
alias for the failed node are also transferred to the partner node. After the failed node
recovers, the node-port IP address and the iSCSI name and alias are returned to the original
node.

Note: The cluster name and node name form parts of the iSCSI name. Changing any of
them might require reconfiguration of all iSCSI hosts that communicate with the system.

iSER supports only one-way authentication through the Challenge Handshake Authentication
Protocol (CHAP). iSCSI supports two types of CHAP authentication: one-way authentication
(iSCSI target authenticating iSCSI initiators) and two-way (mutual) authentication (iSCSI
target authenticating iSCSI initiators, and vice versa).
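
The following sketch creates an iSCSI host object and sets a CHAP secret for one-way
authentication (the host name, IQN, and secret are placeholders):

# Create an iSCSI host object by using its IQN
mkhost -name linuxhost01 -iscsiname iqn.1994-05.com.redhat:linuxhost01
# Set the CHAP secret that the system uses to authenticate this host
chhost -chapsecret mySecret01 linuxhost01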

For more information about iSCSI host attachment, see iSCSI Implementation and Best
Practices on IBM Storwize Storage Systems, SG24-8327.

Make sure that iSCSI initiators, host iSER adapters, and Ethernet switches that are attached
to the system are supported by using SSIC.

2.7.6 Native IP replication


Two systems can be linked over native IP links that are connected directly or by Ethernet
switches to perform RC functions. RC over native IP provides a less expensive alternative to
using FC configurations.

IP replication is supported on both onboard 10 Gbps Ethernet ports and optional 25 Gbps
Ethernet ports. However, when configured over 25 Gbps ports, it does not use RDMA
capabilities, and it does not provide a performance improvement compared to 10 Gbps ports.

As a best practice, use a different port for iSCSI host I/O and IP partnership traffic. Also, use
a different VLAN ID for iSCSI host I/O and IP partnership traffic.
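
After the link is planned, the IP partnership itself is created with a single command on each
system. In this sketch, the remote cluster IP address and the bandwidth values are
placeholders that must reflect your own link sizing:

# Create an IPv4 partnership with the remote system, sized for a
# 1000 Mbps link and a 50% background copy rate
mkippartnership -type ipv4 -clusterip 203.0.113.10 -linkbandwidthmbits 1000 -backgroundcopyrate 50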

Specific intersite link requirements must be met when you are planning to use IP partnership
for RC. These requirements are described at IBM Knowledge Center by selecting
Configuring → Configuring IP partnerships → Intersite link planning. Also, see
Chapter 10, “Advanced Copy Services” on page 505.

2.7.7 Firewall planning


After you have your IP network planned, set up the appropriate firewall rules for each data
flow.

For a list of mandatory and optional network flows that are required for operating the system,
see IBM Knowledge Center and expand Planning → Planning for hardware → Physical
installation planning → IP address allocation and usage.

2.8 Back-end storage configuration


External back-end storage systems (also known as controllers) provide logical volumes
(LUs), which are detected by a system as managed disks (MDisks) and can be used in
storage pools to provision their capacity to the system’s hosts.

The back-end storage subsystem configuration must be planned for all external storage
systems that are attached. Apply the following general guidelines:
򐂰 Most of the supported FC-attached storage controllers must be connected through an FC
SAN switch. However, a limited number of systems (including IBM FlashSystem 900 and a
member of the Storwize family) can be direct-attached by using FC.
򐂰 Connect all back-end storage ports to the SAN switch up to a maximum of 16 and zone
them to all of the system to maximize bandwidth. The system is designed to handle many
paths to the back-end storage.

򐂰 The cluster can be connected to a maximum of 1024 worldwide node names (WWNNs).
The general practice is that:
– EMC DMX/SYMM, all HDS, and SUN/HP HDS clones use one WWNN per port. Each
port appears as a separate controller to the system.
– IBM, EMC CLARiiON, and HP use one WWNN per subsystem. Each port appears as a
part of a subsystem with multiple ports, with up to a maximum of 16 ports (WWPNs)
per WWNN.
However, if you plan for a configuration that might be limited by the WWNN maximum,
verify the WWNN versus WWPN policy with the back-end storage vendor.
򐂰 When defining a controller configuration, avoid hybrid configurations and automated
tiering solutions. Create LUs for provisioning to the system from homogeneous disk
arrays or solid-state drive (SSD) arrays.
򐂰 Do not provision all available drives of the back-end storage capacity as a single LU. A
best practice is to create one LU for every eight HDDs or SSDs on the back-end system.
򐂰 If your back-end storage system is not supported by the round-robin path policy, ensure
that the number of MDisks per storage pool is a multiple of the number of storage ports
that are available. This approach ensures sufficient bandwidth for the storage controller,
and an even balance across storage controller ports.
򐂰 An IBM FlashSystem system must have exclusive access to every LU that is provisioned
to it from a back-end controller. Any specific LU cannot be presented to more than one
system. Presenting the same back-end LU to a system and a host is not allowed.
򐂰 Data reduction (compression and deduplication) on the back-end controller is supported
only with a limited set of IBM Storage systems.

In general, configure back-end controllers as though they are used as stand-alone systems.
However, there might be specific requirements or limitations as to the features that are usable
in the specific back-end storage system. For more information about the requirements that
are specific to your back-end controller, see IBM Knowledge Center and expand
Configuring → Configuring and servicing storage systems.
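
After back-end LUs are mapped to the system, they can be discovered and reviewed from
the CLI, as this short sketch shows:

# Scan the Fibre Channel network for new LUs that back-end controllers present
detectmdisk
# List the discovered MDisks that are not yet assigned to a storage pool
lsmdisk -filtervalue mode=unmanaged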

The system’s large cache and advanced cache management algorithms also allow it to
improve the performance of many types of underlying disk technologies. Because hits to the
cache can occur in the upper (the system itself) and the lower (back-end controller) level of
the overall solution, the solution as a whole can use the larger amount of cache wherever it is.
Therefore, the system’s cache also provides more performance benefits for back-end storage
systems with extensive cache banks.

However, the system cannot increase the throughput potential of the underlying disks in all
cases. The performance benefits depend on the underlying back-end storage technology and
the workload characteristics, including the degree to which the workload exhibits hotspots or
sensitivity to cache size or cache algorithms.

2.9 Internal storage configuration
For general-purpose storage pools with various I/O applications, follow the storage
configuration wizard recommendations in the GUI. For specific applications with known I/O
patterns, use the CLI to create arrays that suit your needs.

An array-level recommendation for all types of internal storage is distributed redundant array
of independent disks (DRAID) 6 (DRAID 6). DRAID 6 outperforms other available redundant
array of independent disks (RAID) levels in most applications while providing fault tolerance
and high rebuild speeds.

For more information about internal storage configuration, see IBM FlashSystem 9200 and
9100 Best Practices and Performance Guidelines, SG24-8448.

2.10 Storage pool configuration


The storage pool is at the center of the many-to-many relationship between the internal drive
arrays or externally virtualized logical unit numbers (LUNs), which are represented as
MDisks, and the volumes. It acts as a container of physical disk capacity from which chunks
of MDisk space, which is known as extents, are allocated to form volumes that are presented
to hosts.

The system supports two types of pools: Data Reduction Pools (DRP) and standard pools.
The type is configured when a pool is created and it cannot be changed later. The type of the
pool determines the set of features that is available on the system:
򐂰 Features that can be implemented only with standard pools are:
– Child pools
– VMware vSphere integration with VMware vSphere Virtual Volumes (VVOLs)
– Multi-tenancy with ownership groups
򐂰 Features that can be implemented only with DRPs are:
– Automatic capacity reclamation with SCSI UNMAP (this feature returns capacity that is
marked as no longer used by a host back to storage pool)
– DRP compression (in-flight data compression)
– DRP deduplication

In addition to providing data reduction options, DRP amplifies the I/O and CPU workload,
which should be accounted for during performance sizing and planning.

Also, self-compressing drives (FCM drives) still perform compression independently of the
pool type.

Another base storage pool parameter is the extent size. There are two implications of a
storage pool extent size:
򐂰 The maximum volume, MDisks, and managed storage capacity depend on the extent size.
The bigger the extent that is defined for the specific pool, the larger is the maximum size of
this pool, the maximum MDisk size in the pool, and the maximum size of a volume that is
created in the pool.
򐂰 The volume sizes must be a multiple of the extent size of the pool in which the volume is
defined. Therefore, the smaller the extent size, the better control that you have over the
volume size.

The system supports extent sizes 16 - 8192 mebibytes (MiB). The extent size is a property of
the storage pool and it is set when the storage pool is created.

Note: The base pool parameters, pool type, and extent size are set during pool creation
and cannot be changed later. If you must change extent size or pool type, all volumes must
be migrated from a storage pool and then the pool itself must be deleted and re-created.

For more information about the relationship between a system’s maximum configuration and
extent size, see Configuration Limits and Restrictions and look for your platform and code
version.

When planning pools, note that encryption is defined at the pool level and the encryption
setting cannot be changed after a pool is created. If you create an unencrypted pool, there is
no way to encrypt it later. Your only option is to delete it and re-create it as encrypted.
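
To show how these creation-time choices appear in practice, the following sketch creates an
encrypted DRP (the pool name and extent size are placeholders):

# Create an encrypted Data Reduction Pool with a 4096 MiB extent size
mkmdiskgrp -name DRPool0 -ext 4096 -datareduction yes -encrypt yes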

When planning storage pool layout, consider the following aspects:


򐂰 Pool reliability, availability, and serviceability (RAS):
– The storage pool is a failure domain. If one array or external MDisk is unavailable, the
pool and all volumes in it go offline.
– The number and size of storage pools affects system availability. Using a larger
number of smaller pools reduces the failure domain if one of the pools goes offline.
However, increasing the number of storage pools affects the storage use efficiency,
and the number is subject to the configuration maximum limit.
– You cannot migrate volumes between storage pools with different types or extent sizes.
However, you can use volume mirroring to create copies between storage pools.
򐂰 Pool performance:
– Do not mix same-tier arrays or MDisks with different performance characteristics in one
pool. For example, do not use DRAID 6 arrays of six tier 1 SSDs and DRAID 6 arrays of
24 tier 1 SSDs in the same pool. This technique is the only way to ensure consistent
performance characteristics of volumes that are created from the pool.
Arrays with different tiers in one pool may be used because their performance
differences become beneficial when you use the Easy Tier function.
– Create multiple storage pools if you must isolate specific workloads to separate
storage.
– Ensure that performance sizing was done for selected pool type and feature set.

2.10.1 The storage pool and cache relationship


The system uses cache partitioning to limit the potential negative effects that a poorly
performing storage controller can have on the clustered system. The cache partition
allocation size is based on the number of configured storage pools. This design protects
against an individual overloaded back-end storage system from filling the system write cache
and degrading the performance of the other storage pools.

Table 2-2 lists the limits of the write-cache data that can be used by a single storage pool.

Table 2-2 Limits of the cache data

Number of storage pools   Upper limit
1                         100%
2                         66%
3                         40%
4                         30%
5 or more                 25%

No single partition can occupy more than its upper limit of write cache capacity. When the
maximum cache size is allocated to the pool, the system starts to limit incoming write I/Os for
volumes that are created from the storage pool. The host writes are limited to the destage
rate on a one-out-one-in basis.

Only writes that target the affected storage pool are limited. The read I/O requests for the
throttled pool continue to be serviced normally. However, because the system is offloading
cache data at the maximum rate that the back-end storage can sustain, read response times
are expected to be affected.

All I/O that is destined for other (non-throttled) storage pools continues as normal.

2.11 Volume configuration


When planning a volume, consider the required performance, availability, and capacity. Every
volume is assigned to an I/O group that defines which pair of system nodes services I/O
requests to the volume.

Note: No fixed relationship exists between I/O groups and storage pools.

When a host sends I/O to a volume, it can access the volume through either of the nodes in
the I/O group, but each volume has a preferred node. Many of the multipathing driver
implementations that the system supports use this information to direct I/O to the preferred
node. The other node in the I/O group is used only if the preferred node is not accessible.

During volume creation, the system selects the node in the I/O group that has the fewest
volumes to be the preferred node. After the preferred node is chosen, it can be changed
manually, if required.
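
The following sketch creates a volume with an explicit caching I/O group and preferred node,
and then changes the preferred node (all names are placeholders):

# Create a 100 GB volume in Pool0, owned by I/O group 0 with node 1 preferred
mkvdisk -mdiskgrp Pool0 -iogrp 0 -node 1 -size 100 -unit gb -name appvol01
# Change the preferred node of the volume later, if required
movevdisk -node 2 appvol01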

Strive to distribute volumes evenly across available I/O groups and nodes within the system.

For more information about volume types, see Chapter 6, “Volumes” on page 277.

2.11.1 Planning for image mode volumes


Use image mode volumes to present to hosts data that is written to the back-end storage
before it was virtualized. An image mode volume directly corresponds to the MDisk from
which it is created.

Image mode volumes are a useful tool in storage migration and during system
implementation to a working environment.

2.11.2 Planning for fully allocated volumes


A fully allocated volume presents to mapped hosts the same capacity that the volume uses in
the storage pool. No data reduction is performed on a pool level. However, if a fully allocated
volume is provisioned from a pool with data reducing storage, such as self-compressing
drives (FCM drives), the data is still compressed on a drive level.

Fully allocated volumes provide the best performance because they do not cause I/O
amplification, and they require less CPU time compared to other volume types.

2.11.3 Planning for thin-provisioned volumes


A thin-provisioned volume presents a different capacity to mapped hosts than the capacity
that the volume uses in the storage pool. Space is not allocated on a thin-provisioned volume
if an incoming host write operation contains all zeros.

Using the thin-provisioned volume feature that is called zero detect, you can reclaim unused
allocated disk space (zeros) when you convert a fully allocated volume to a thin-provisioned
volume by using volume mirroring.

DRPs enhance capacity efficiency for thin-provisioned volumes by monitoring the host’s
capacity usage. When the host indicates that the capacity is no longer needed, the capacity is
released and can be reclaimed by the DRP to be redistributed automatically. Standard pools
cannot reclaim capacity.

Note: Avoid using thin-provisioned volumes on a data reducing back end like
self-compressing drives.

2.11.4 Planning for compressed volumes


With compressed volumes, data is compressed as it is written to disk, which saves more
space. When data is read by hosts, the data is decompressed.

Compression is available through data reduction support as part of the system. If you want
volumes to use compression as part of data reduction support, compressed volumes must
belong to DRPs.

If you use compressed volumes over a pool with self-compressing drives, the drive still
attempts compression because it cannot be disabled on the drive level. However, there is no
performance impact due to FCM compression.

Before implementing compressed volumes, perform data analysis to know your average
compression ratio and ensure that performance sizing was done for compression.

Note: If you use compressed volumes over FCM drives, the compression ratio on a drive
level must be assumed to be 1:1 to avoid array overprovisioning and running out of space.

2.11.5 Planning for deduplicated volumes
Deduplication can be configured for volumes that use different capacity saving methods, such
as thin provisioning. Deduplicated volumes must be created in DRPs for added capacity
savings. Deduplication is a type of data reduction that eliminates duplicate copies of data.
Deduplication of user data occurs within a DRP and only between volumes or volume copies
that are marked as deduplicated.

With deduplication, the system identifies unique chunks of data by computing signatures for
them, and uses the signatures to determine whether new data is already written to the
storage. Deduplication is a hash-based solution, which means that chunks of data are
compared by their signatures rather than by the data itself. If
the signature of the new data matches an existing signature that is stored on the system, then
the new data is replaced with a reference. The reference points to the stored data instead of
writing the data to storage. This process saves the capacity of the back-end storage by not
writing new data to storage, and it might improve the performance of read operations to data
that has an existing signature.

The same data pattern can occur many times, and deduplication decreases the amount of
data that must be stored on the system. A part of every hash-based deduplication solution is
a repository that supports looking up matches for incoming data. The system contains a
database that maps the signature of the data to the volume and its virtual address. If an
incoming write operation does not have a signature that is stored in the database, then a
duplicate is not detected and the incoming data is stored on back-end storage.

To maximize the space that is available for the database, the system distributes this
repository between all nodes in the I/O groups that contain deduplicated volumes. Each node
carries a distinct portion of the records that are stored in the database. If nodes are removed
or added to the system, the database is redistributed between the nodes to ensure full use of
the available memory.

Before implementing deduplication, perform data analysis to estimate deduplication savings
and make sure that system performance sizing was done for deduplication.
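
To summarize the volume types that are described in 2.11.2 - 2.11.5, the following sketch
creates a fully allocated, a thin-provisioned, and a compressed plus deduplicated volume.
Pool and volume names are placeholders, and the compressed plus deduplicated volume
requires a DRP:

# Fully allocated volume in a standard pool
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name vol_full
# Thin-provisioned volume with a 2% real capacity buffer that expands on demand
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -name vol_thin
# Compressed and deduplicated volume in a Data Reduction Pool
mkvolume -pool DRPool0 -size 100 -unit gb -compressed -deduplicated -name vol_dedup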

2.12 Host attachment planning


The system supports the attachment of various host hardware types running different OSs
with FC SAN or IP SAN. For a list of instructions that is specific to your host setup, see IBM
Knowledge Center and expand Configuring → Host attachment.

2.12.1 Queue depth


Typically, hosts issue subsequent I/O requests to storage systems without waiting for the
completion of previous ones. The number of outstanding requests is called queue depth.
Sending multiple I/O requests in parallel (asynchronous I/O) provides significant performance
benefits compared to sending them one-by-one (synchronous I/O). However, if the number of
queued requests exceeds the maximum that is supported by the storage controller, you
experience performance degradation.

For more information about how to calculate correct host queue depth for your environment,
see IBM Knowledge Center and expand Configuring → Host attachment.

2.12.2 Microsoft Offloaded Data Transfer
If your Windows hosts are configured to use Microsoft Offloaded Data Transfer (ODX) to
offload the copy workload to the storage controller, consider the benefits of this technology
against the extra load on the storage controllers. The benefits and effects of enabling ODX
are especially prominent in Microsoft Hyper-V environments with ODX enabled.

2.12.3 SAN boot support


The system supports SAN boot or startup for selected configurations of hosts running AIX,
Microsoft Windows, and other OSs. To check whether your configuration is supported for SAN
boot, see the SSIC.

2.12.4 Planning for large deployments


Each I/O group can have up to 512 host objects defined. This limit is the same whether hosts
are attached by using FC, iSCSI, or a combination of both. To allow more than 512 hosts to
access the storage, you must divide them into groups of 512 hosts or less and map each
group to a single I/O group only. With this approach, you can configure up to 2048 host
objects on a system with four I/O groups (eight nodes).

For best performance, split each host group into two sets. For each set, configure the
preferred node for the volumes that are presented to that set to a different node of the I/O
group. This approach helps to evenly distribute load between the I/O group nodes.
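
In the following sketch, a host object is created and associated with a single I/O group, and
then a volume from that I/O group is mapped to it (the host name and WWPN are
placeholders):

# Create a host object that is restricted to I/O group 0
mkhost -name appsrv01 -fcwwpn 2100000E1E30ACFC -iogrp 0
# Map a volume that belongs to I/O group 0 to this host
mkvdiskhostmap -host appsrv01 appvol01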

Note: A volume can be mapped only to a host that is associated with the I/O group to
which the volume belongs.

2.12.5 Planning for SCSI UNMAP


UNMAP is a set of SCSI primitives that hosts use to indicate to a SCSI target that space that
is allocated to a range of blocks on a target storage volume is no longer required. With this
command, the storage controller takes measures and optimizes the system so that the space
can be reused for other purposes.

The system supports end-to-end UNMAP compatibility, which means that a command that is
issued by a host is processed and sent to the back-end storage device.

UNMAP processing can be controlled with two separate settings:


򐂰 One setting enables hosts to send UNMAP commands to the system.
򐂰 The other setting controls whether the system sends UNMAP commands to back-end
storage (drives and external controllers).

By default, both types are enabled, and it is a best practice to keep them enabled.

However, if the system has a virtualized slow back-end controller (for example, back-end
storage running nearline (NL) SAS drives for the lowest tier), UNMAP requests that are sent
from the host as large I/O chunks might overload the back end. Thorough planning is required
if you plan to virtualize slow back-end controllers.
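
As a sketch (the parameter and value names that are shown here are assumptions, so verify
the exact syntax for your code level), both settings are system-wide and can be displayed
and changed from the CLI:

# Display the current system settings, including the UNMAP attributes
lssystem
# Enable hosts to send UNMAP commands to the system
# (parameter name is an assumption; check your code level)
chsystem -hostunmap on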

2.13 Planning copy services
IBM FlashSystem systems offer a set of copy services, such as IBM FlashCopy (snapshots)
and RC, in synchronous and asynchronous modes. For more information about copy
services, see Chapter 10, “Advanced Copy Services” on page 505.

2.13.1 FlashCopy guidelines


With the FlashCopy function, you can perform a point-in-time (PiT) copy of one or more
volumes. The FlashCopy function creates a PiT or time-zero (T0) copy of data that is stored
on a source volume to a target volume by using a Copy on Write (CoW) and Copy on Demand
mechanism.

While the FlashCopy operation is performed, I/O to the source volume is briefly paused to
initialize the FlashCopy bitmap, and then I/O can resume. Although several FlashCopy options require
the data to be copied from the source to the target in the background, which can take time to
complete, the resulting data on the target volume is presented so that the copy appears to
complete immediately.

The FlashCopy function operates at the block level below the host OS and cache, so those
levels must be flushed by the OS for a FlashCopy copy to be consistent.

When you use the FlashCopy function, observe the following guidelines:
򐂰 Both the FlashCopy source and target volumes should use the same preferred node.
򐂰 If possible, keep the FlashCopy source and target volumes on separate storage pools.

For more information about planning for the FlashCopy function, see IBM FlashSystem 9200
and 9100 Best Practices and Performance Guidelines, SG24-8448.
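
The following sketch creates and starts a FlashCopy mapping (volume and mapping names
are placeholders):

# Create a FlashCopy mapping with a 50% background copy rate
mkfcmap -source vol1 -target vol1_snap -copyrate 50
# Prepare (flush host and cache data) and start the mapping in one step
startfcmap -prep fcmap0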

2.13.2 Planning for Metro Mirror and Global Mirror


MM is a copy service that provides a continuous, synchronous mirror of one volume to a
second volume. The systems can be up to 300 kilometers (186.4 miles) apart. Because the
mirror is updated synchronously, no data is lost if the primary system becomes unavailable.
MM is typically used for DR purposes, where it is important to avoid any data loss.

GM is a copy service that is similar to MM, but copies data asynchronously. You do not have
to wait for the write to the secondary system to complete. For long distances, performance is
improved compared to MM. However, if a failure occurs, you might lose data.

GM uses one of two methods to replicate data. Multicycling GM is designed to replicate data
while adjusting for bandwidth constraints. It is appropriate for environments where it is
acceptable to lose a few minutes of data if a failure occurs.

For environments with higher bandwidth, non-cycling GM can be used so that less than a
second of data is lost if a failure occurs. GM also works well when sites are more than 300
kilometers (186.4 miles) apart.

When copy services are used, all components in the SAN must sustain the workload that is
generated by application hosts and the data replication workload. Otherwise, the system can
automatically stop copy services relationships to protect your application hosts from
increased response times.

While planning RC services, consider the following aspects:
򐂰 Copy services topology
One or more clusters can participate in a copy services relationship. One typical and
simple use case is DR, where one site is active and another performs only a DR function.
In such a case, the solution topology is simple, with one cluster per site and uniform
replication direction for all volumes. However, multiple other topologies are possible that
you can use to design a solution that optimally fits your set of requirements.
򐂰 GM versus MM
Decide which type of copy services you are going to use. This decision should be
requirement-driven. With MM, you prevent any data loss during a system failure, but it has
more stringent requirements, especially regarding intercluster link bandwidth and latency,
and remote site storage performance. Also, MM incurs a performance penalty because
writes are not confirmed to host until a data reception confirmation is received from the
remote site.
With GM, you can relax constraints on the system requirements at the cost of using
asynchronous replication, which enables the remote site to lag behind the local site. The
choice of the replication type has major effects on all other aspects of the copy services
planning.
Using GM and MM between the same two clustered systems is supported. Also, the RC type of an existing relationship can be changed from one to the other.
For native IP replication, use Multicycling GM, which is also known as Global Mirror with Change Volumes (GMCV).
򐂰 Intercluster link
The local and remote clusters can be connected by an FC or IP network. Each of the
technologies has its own requirements concerning supported distance, link speeds,
bandwidth, and vulnerability to frame or packet loss.
When planning the intercluster link, consider the peak performance that is required. This
consideration is especially important for MM configurations.
The bandwidth between sites must be sized to meet the peak workload requirements.
When planning the inter-site link, consider the initial sync and any future resync
workloads. It might be worthwhile to secure more link bandwidth for the initial data
synchronization.
If the link between the sites is configured with redundancy so that they can tolerate single
failures, you must size the link so that the bandwidth and latency requirements are met
even during single failure conditions.
When planning the inter-site link, note whether it is dedicated to the inter-cluster traffic or
is going to be used to carry any other data. Sharing the link with other traffic might affect
the link’s ability to provide the required bandwidth for data replication.
򐂰 Volumes and consistency groups
Determine whether volumes can be replicated independently. Some applications use
multiple volumes and require that the order of writes to these volumes is preserved in the
remote site. Notable examples of such applications are databases.
If an application requires that the write order is preserved for the set of volumes that it
uses, create a consistency group for these volumes.
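
As a hedged sketch, the following commands create an FC partnership and a consistency group that keeps a database's data and log volumes write-order consistent. All object names and the bandwidth values are placeholders (add -global to mkrcrelationship to create GM instead of MM relationships):

IBM_IBM FlashSystem:ITSO-FS9100:superuser>mkfcpartnership -linkbandwidthmbits 2000 -backgroundcopyrate 50 REMOTE-FS9100
IBM_IBM FlashSystem:ITSO-FS9100:superuser>mkrcconsistgrp -name DB_CG -cluster REMOTE-FS9100
IBM_IBM FlashSystem:ITSO-FS9100:superuser>mkrcrelationship -master DB_DATA -aux DB_DATA_DR -cluster REMOTE-FS9100 -consistgrp DB_CG
IBM_IBM FlashSystem:ITSO-FS9100:superuser>mkrcrelationship -master DB_LOG -aux DB_LOG_DR -cluster REMOTE-FS9100 -consistgrp DB_CG

Because both relationships belong to one group, they start, stop, and fail over together, which preserves the cross-volume write order that the database requires.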

2.14 Data migration
Data migration is an important part of an implementation, so you must prepare a detailed data
migration plan. You might need to migrate your data for one of the following reasons:
򐂰 Redistribute a workload within a clustered system across back-end storage subsystems.
򐂰 Move a workload on to newly installed storage.
򐂰 Move a workload off old or failing storage ahead of decommissioning it.
򐂰 Move a workload to rebalance a changed load pattern.
򐂰 Migrate data from an older disk subsystem.
򐂰 Migrate data from one disk subsystem to another one.

Because multiple data migration methods are available, choose the method that best fits your
environment, OS platform, type of data, and the application’s service-level agreement (SLA).

Data migration methods can be divided into three classes:


򐂰 Based on the host OS, for example, by using the system’s logical volume manager (LVM)
򐂰 Based on specialized data migration software
򐂰 Based on the system data migration features

For more information about system data migration tools, see Chapter 8, “Storage migration”
on page 443 and Chapter 10, “Advanced Copy Services” on page 505.

With data migration, apply the following guidelines:


򐂰 Choose the data migration method that best fits your OS platform, type of data, and SLA.
򐂰 Choose where you want to place your data after migration in terms of the storage tier,
pools, and back-end storage.
򐂰 Check whether enough free space is available in the target storage pool.
򐂰 To minimize downtime during the migration, plan ahead of time all of the required
changes, including zoning, host definition, and volume mappings.
򐂰 Prepare a detailed operation plan so that you do not overlook anything at data migration
time. Especially for a large or critical data migration, have the plan peer-reviewed and
formally accepted by an appropriate technical design authority within your organization.
򐂰 Perform and verify a backup before you start any data migration.
򐂰 You might want to use the system as a data mover to migrate data from a non-virtualized
storage subsystem to another non-virtualized storage subsystem. In this case, you might
have to add checks that relate to the specific storage subsystem that you want to migrate.
Be careful when you are using slower disk subsystems for the secondary volumes for
high-performance primary volumes because the system’s cache might not be able to
buffer all the writes. Flushing cache writes to slower back-end storage might impact
performance of your hosts.
򐂰 Consider storage performance. The migration workload might be much higher than
expected during normal operations of the system. If there is already application data on
the system to which you are migrating, the application performance might suffer if the
system is overloaded. Consider using host or volume level throttles when performing
migration on a production environment.
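
As an example, the following hedged CLI sketch limits a single volume to 100 MBps during a migration; the volume name and the limit are placeholders to adapt to your environment:

IBM_IBM FlashSystem:ITSO-FS9100:superuser>mkthrottle -type vdisk -bandwidth 100 -vdisk APP_VOL01
IBM_IBM FlashSystem:ITSO-FS9100:superuser>lsthrottle
IBM_IBM FlashSystem:ITSO-FS9100:superuser>rmthrottle throttle0

Remove the throttle (rmthrottle) after the migration completes so that normal production traffic is not constrained. Host-level and pool-level throttles are created the same way with a different -type value.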

2.15 Performance monitoring with IBM Storage Insights
IBM Storage Insights is integral to monitoring and ensuring the continued availability of the
system.

Available at no additional charge, the cloud-based IBM Storage Insights product provides a
single dashboard that provides a clear view of all your IBM block storage. You can make
better decisions by seeing trends in performance and capacity.

With storage health information, you can focus on areas that need attention. When IBM Support is needed, IBM Storage Insights simplifies uploading logs, speeds resolution with online configuration data, and provides an overview of open tickets, all in one place.

IBM Storage Insights provides a unified view of IBM systems. By using it, you can see all of
your IBM storage inventory as a live event feed so that you know what is going on with your
storage.

IBM Storage Insights provides advanced customer service with an event filter that provides
the following functions:
򐂰 The ability for you and IBM Support to view, open, and close support tickets, and to track trends.
򐂰 With the auto log collection capability, you can collect the logs and send them to IBM
before IBM Support starts looking into the problem. This feature can reduce the time to
solve the case by as much as 50%.

Figure 2-3 shows the architecture of the IBM Storage Insights application, the supported
products, and the three main teams who can benefit from the use of the tool.

Figure 2-3 IBM Storage Insights architecture

IBM Storage Insights provides a lightweight data collector that is deployed on a Linux,
Windows, or AIX server or a guest in a virtual machine (VM) (for example, a VMware guest).

The data collector streams performance, capacity, asset, and configuration metadata to your
IBM Cloud instance.



The metadata flows in one direction, that is, from your data center to IBM Cloud over HTTPS.
In the IBM Cloud, your metadata is protected by physical, organizational, access, and security
controls. IBM Storage Insights is ISO/IEC 27001 Information Security Management certified.

Figure 2-4 shows the data flow from systems to the IBM Storage Insights cloud.

Figure 2-4 Data flow from the storage systems to the IBM Storage Insights cloud

Metadata about the configuration and operations of storage resources is collected, such as:
򐂰 Name, model, firmware, and type of storage system
򐂰 Inventory and configuration metadata for the storage system's resources, such as
volumes, pools, disks, and ports
򐂰 Capacity values, such as capacity, unassigned space, used space, and the compression ratio
򐂰 Performance metrics, such as read and write data rates, I/O rates, and response times

The application data that is stored on the storage systems cannot be accessed by the data
collector.

Access to the metadata that is collected is restricted to the following users:


򐂰 The customer who owns the dashboard.
򐂰 The administrators who are authorized to access the dashboard, such as the customer’s
operations team.
򐂰 The IBM Cloud team that is responsible for the day-to-day operation and maintenance of
IBM Cloud instances.
򐂰 IBM Support for investigating and closing service tickets.

For more information about IBM Storage Insights and to sign up and register for the free
service, see the following resources:
򐂰 Fact Sheet
򐂰 Demonstration
򐂰 Security Guide
򐂰 Registration

For more information, see 13.12, “IBM Storage Insights monitoring” on page 841.

2.16 Configuration backup procedure


Save the configuration before and after any major configuration changes on the system.
Saving the configuration is a crucial part of management, and various methods can be
applied to back up your system configuration. A best practice is to implement an automatic
configuration backup by using the configuration backup command. Make sure that you save
the configuration to a host system that does not depend on the storage that is provisioned
from a system whose configuration is backed up.
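
For example, the following hedged sketch runs a backup on the system and then copies the resulting files to a management workstation with pscp (any SCP client works); the cluster address and target directory are placeholders:

IBM_IBM FlashSystem:ITSO-FS9100:superuser>svcconfig backup
..................
CMMVC6155I SVCCONFIG processing completed successfully

pscp -unsafe superuser@<cluster_ip>:/dumps/svc.config.backup.* C:\ConfigBackups\

The svc.config.backup.xml file that is produced describes the system objects and is one of the inputs to a system recovery, which is why it must be kept off the system itself.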

For more information, see 13.4, “Configuration backup” on page 785.


Chapter 3. Initial configuration


This chapter describes the initial configuration of the IBM FlashSystem 9100, IBM
FlashSystem 9200, IBM FlashSystem 7200, IBM FlashSystem 5100, IBM Storwize V5100, or
IBM Storwize V7000 Gen3 systems. It provides step-by-step instructions about how to do the
initial setup and defines the base settings of the system, which are done during the
implementation phase before volumes are created and provisioned.

This chapter includes the following topics:


򐂰 3.1, “Prerequisites” on page 106
򐂰 3.2, “System initialization” on page 107
򐂰 3.3, “System setup” on page 110
򐂰 3.4, “Base configuration” on page 121
򐂰 3.5, “Configuring management access” on page 136



3.1 Prerequisites

Note: IBM FlashSystem 9100 and IBM FlashSystem 9200 are installed by an IBM System
Services Representative (IBM SSR). You must provide all the necessary information to the
IBM SSR by filling out the planning worksheets, which can be found in IBM Knowledge
Center by selecting Planning → Planning worksheets (customer task).

After the IBM SSR completes their task, continue the setup by following the instructions in
3.3, “System setup” on page 110.

Before initializing and setting up the system, ensure that the following prerequisites are met:
򐂰 The physical components fulfill all the requirements and are correctly installed, including:
– The control enclosures are physically installed in the racks.
– The Ethernet and Fibre Channel (FC) cables are connected.
– The expansion enclosures, if available, are physically installed and attached to the
control enclosures that will use them.
– The system control enclosures and optional expansion enclosures are powered on.
򐂰 The web browser that is used for managing the system is supported by the management
GUI. For the list of supported browsers, see IBM Knowledge Center.
򐂰 You have the required information, including:
– The IPv4 (or IPv6) addresses that are assigned for the system’s management
interfaces:
• The unique cluster IP address, which is the address that is used for the
management of the system.
• Unique service IP addresses, which are used to access node service interfaces.
You need one address for each node (two per control enclosure).
• The IP subnet mask for each subnet that is used.
• The IP gateway for each subnet that is used.
– The licenses that might be required to use particular functions (depending on the
system type):
• Remote Copy (RC).
• External virtualization.
• IBM FlashCopy.
• Compression.
• Encryption.
– Information that is used by a system when performing Call Home functions, such as:
• The company name and system installation address.
• The name, email address, and phone number of the storage administrator whom
IBM can contact if necessary.
– (optional) The Network Time Protocol (NTP) server IP address.

– (optional) The Simple Mail Transfer Protocol (SMTP) server IP address, which is
necessary only if you want to enable Call Home or want to be notified about system
events through email.
– (optional) The IP addresses for Remote Support Proxy Servers, which are required
only if you want to use them with the Remote Support Assistance feature.

3.2 System initialization


This section provides step-by-step instructions about how to create the system cluster.

To start the initialization procedure, connect a desktop PC or a notebook to the technician port. The technician port is a dedicated 1 Gb Ethernet port at the rear of each of the nodes in the control enclosure. It can be used only to initialize or service the system. It cannot be connected to an Ethernet switch because it supports only a direct connection, and it remains disconnected after the initial setup is done.

The location of the technician port is shown in Figure 3-1.

Figure 3-1 Location of the technician port

The technician port runs an IPv4 DHCP server and it can assign an address to any device
that is connected to this port. Ensure that your PC or notebook Ethernet adapter is configured
to use a DHCP client if you want the IP to be assigned automatically. If you prefer not to use
DHCP, you can set a static IP on the Ethernet port from the [Link]/24 subnet, for
example, [Link] with the netmask [Link].

The default IP address of a technician port on a node canister is [Link]. Do not use this
IP address for your PC or notebook.

Note: Ensure that the technician port is not connected to the organization’s network. No
Ethernet switches or hubs are supported on this port.

3.2.1 System initialization process


Before a system is initialized, each node canister of a new system remains in the candidate
state and cannot process I/O. During initialization, the nodes in one enclosure are joined in a
cluster, which is later configured to process data. If your system has more than one control enclosure, initialize only the first one. The remaining control enclosures are added to the cluster by using a cluster management interface (GUI or CLI) after the first one is set up.

You must specify an IPv4 or an IPv6 system management address, which is assigned to Ethernet port 1 on each node and is used to access the management GUI and CLI. After the system is initialized, you can specify other IP addresses.



Note: Do not perform the system initialization procedure on more than one node canister
of one control enclosure. After initialization completes, use the management GUI or CLI to
add control enclosures to the system.

To do the initialization of a new system, complete the following steps:


1. Connect your PC or notebook to a technician port of any canister of the control enclosure.
Ensure that you obtained a valid IPv4 address with DHCP.
2. Open a supported web browser and go to [Link]. The browser is automatically
redirected to the System Initialization wizard. You can also use the IP address
[Link] if you are not automatically redirected.

Note: During the system initialization, you are prompted to accept untrusted certificates
because the system certificates are self-signed. If you are directly connected to the
service interface, there is no doubt about the identity of the certificate issuer, so you can
safely accept the certificates.

If the system is not in a state that allows initialization, you are redirected to the Service
Assistant interface. Use the displayed error codes to troubleshoot the problem.
3. The Welcome dialog box opens, as shown in Figure 3-2. Click Next to start the procedure.

Figure 3-2 System Initialization: Welcome dialog box

4. A window opens in which two options are presented, as shown in Figure 3-3 on page 109.
Select the first option and click Next.

Figure 3-3 System Initialization: Create a system or expand the existing one

If you select As an additional node in an existing system, you are prompted to disconnect from the technician port and use the GUI of an existing system to add new nodes.
5. Enter the management IP address information for the new system, as shown in
Figure 3-4. Set the IP address, network mask, and gateway.

Figure 3-4 System Initialization: Management IP

When you click Next, a command that creates the cluster runs.



6. A window with a restart timer opens. When the timeout is reached, you can click Next to see the final initialization window, as shown in Figure 3-5. Follow the instructions; after you click Finish, the browser is redirected to the management IP address so that you can access the system GUI.

Figure 3-5 System Initialization: Complete

If you cannot connect to a network that has access to the management IP, you can
continue the system setup from any other workstation that can reach it.

3.3 System setup


This section provides instructions about how to define the basic settings of the system by
using the system setup wizard.

3.3.1 System setup wizard


After the initialization is complete and you are redirected to a management GUI from your PC
or notebook, or you browse to the management IP address of a freshly initialized system from
another workstation, you must complete the system setup wizard to define the basic settings
of the system.

Note: Experienced users can disable the system setup wizard and complete the
configuration manually. However, this method is not recommended for most use cases.

To disable the system setup wizard on a new system, run the following command:
chsystem -easysetup no

Note: During the setup wizard, you are prompted to change the default superuser
password. If the wizard is bypassed, the system blocks the configuration functions until it is
changed. All attempts at configuration return the following error:
CMMVC9473E The command failed because the superuser password must be changed
before the system can be configured

Note: All configuration settings that are done by using the system setup wizard can be
changed later by using the system GUI or CLI.

The first time that you connect to the management GUI, you are prompted to accept
untrusted certificates because the system certificates are self-signed. If your company policy
requests certificates that are signed by a trusted certificate authority (CA), you can install
them after you complete the system setup. For more information about how to perform this
task, see 3.5.1, “Configuring secure communications” on page 136.

To complete the system setup wizard, complete the following steps:


1. Log in to the system GUI. Until the wizard is complete, you may use only the superuser account, as shown in Figure 3-6. Click Sign in.

Note: The default password for the superuser account is passw0rd (with the number
zero and not the capital letter O). The default password must be changed by using the
system setup wizard or after the first CLI login. The new password cannot be set to the
default one.

Figure 3-6 System Setup: Logging in for the first time



2. The welcome window opens, as shown in Figure 3-7. Verify the prerequisites and click
Next.

Figure 3-7 System Setup: Welcome

3. Carefully read the license agreement, select I agree with the terms in the license
agreement if you want to continue the setup, as shown in Figure 3-8, and click Next.

Figure 3-8 System Setup: License agreement

4. Enter a new password for the superuser, as shown in Figure 3-9. A valid password is 6 -
64 characters and cannot begin or end with a space. Also, the password cannot be set to
match the default password. Click Apply and then Next.

Figure 3-9 System Setup: Changing the password for the superuser

Note: All configuration changes that are done with the system setup wizard are applied
immediately, including the password change.

5. Enter the name that you want to give the new system, as shown in Figure 3-10. Click
Apply and then Next.

Figure 3-10 System Setup: Setting the system name

Avoid using an underscore (_) in a system name. While permitted here, it is not allowed in
domain name server (DNS) shortnames and fully qualified domain names (FQDNs), so
such naming might cause confusion and access issues. The following characters can be
used: A - Z, a - z, 0 - 9, and - (hyphen).



6. Enter the number of licensed enclosures or licensed capacity for each function, as shown in Figure 3-11.

Figure 3-11 System Setup: Setting the system licenses

The IBM FlashSystem 5100 system follows an enclosure-based licensing scheme that
allows the use of certain licensed functions on the number of enclosures (control and
expansion) that is indicated in the license.
IBM FlashSystem 7200, IBM FlashSystem 9100, and IBM FlashSystem 9200 systems use
differential and capacity-based licensing. For external virtualization, differential licensing
offers different pricing rates for different types of storage and is based on the number of
storage capacity units (SCUs) that are purchased. For other licensed functions, the
system supports capacity-based licensing.
Make sure that the numbers you enter here match the numbers in your license
authorization papers.
When done, click Apply and then Next.

Note: Encryption uses a key-based licensing scheme, and it is activated later in the
wizard.

7. Enter the date and time settings. In the example that is shown in Figure 3-12, the date and
time are set by using an NTP server. Generally, use an NTP server so that all of your
storage area network (SAN) and storage devices have a common time stamp. This
practice facilitates troubleshooting and prevents time stamp-related errors if you use a key
server as an encryption key provider.

Figure 3-12 System Setup: Setting the date and time

If you choose to manually enter these settings, you are prompted to input the date, time,
and time zone, or you can take those settings from your web browser. You cannot use a
24-hour clock system here, but you can switch to it later by using the system GUI.
When the date and time are set, click Apply and then Next.
8. Select whether the encryption feature was purchased for this system, as shown in
Figure 3-13.

Figure 3-13 System Setup: Encryption activation

If encryption is not planned at this moment, select No and click Next. You can enable this
feature later, as described in Chapter 12, “Encryption” on page 701.
If you purchased the encryption feature, you are prompted to activate your license
manually or automatically. The encryption license is key-based and required for each
control enclosure.
You can use automatic activation if the PC or notebook that you use to connect to the GUI
and run the system setup wizard has internet access. If no internet connection is available,
use manual activation and follow the instructions. For more information, see Chapter 12,
“Encryption” on page 701.
After the encryption license is activated, you see a green check mark for each enclosure,
as shown in Figure 3-14. After all the control enclosures show that encryption is licensed,
click Next.

Figure 3-14 System Setup: Encryption licensed



9. Set up the Call Home functions, as shown in Figure 3-15. With Call Home enabled, IBM
automatically opens problem reports and contacts you to verify whether replacement parts
are required.

Figure 3-15 System Setup: Call Home methods

Note: It is a best practice to configure Call Home and keep it enabled if your system is
under warranty or if you have a hardware maintenance agreement.

On IBM FlashSystem 9100 and IBM FlashSystem 9200 systems, an IBM SSR configures
Call Home during installation. You need to check only whether all the entered data is
correct.
The system supports two methods of sending Call Home notifications to IBM:
– Cloud Call Home
– Call Home with email notifications.
Cloud Call Home is the default and preferred option for a system to report event
notifications to IBM Support. With this method, the system uses RESTful APIs to connect
to an IBM centralized file repository that contains troubleshooting information that is
gathered from customers. This method requires no extra configuration.
The system may also be configured to use email notifications for this purpose. If this
method is selected, you are prompted to enter the SMTP server IP address.
If both methods are enabled, cloud Call Home is used, and the email notifications method
is kept as a backup.
For more information about setting up Call Home, including Cloud Call Home, see
Chapter 13, “Reliability, availability, and serviceability, monitoring and logging, and
troubleshooting” on page 769.
If either of these methods is selected, the system location and contact information must be
entered. This information is used by IBM to provide technical support. All fields in the form
must be populated. In this step, the system also verifies that it can contact the Cloud Call
Home servers, as shown in Figure 3-16 on page 117.

Figure 3-16 System Setup: System location

After clicking Next, you can provide business-to-business contact information that IBM
Support uses to contact a person who manages this machine if it is necessary, as shown
in Figure 3-17.

Figure 3-17 System Setup: Contact information

If the Email notifications option was selected, you are prompted to enter the details for
the email servers to be used for Call Home. Figure 3-18 shows an example. You can click
Ping to verify that the email server is reachable over the network. Click Apply and then
Next.

Figure 3-18 System Setup: Email servers



10. IBM FlashSystem family systems may be used with IBM Storage Insights, which is an
IBM cloud storage monitoring and management tool. During this setup phase, the system
tries to contact the IBM Storage Insights web service. If it is available, you are prompted to
sign up, as shown in Figure 3-19.

Figure 3-19 System Setup: IBM Storage Insights

If a connection cannot be established, you are prompted to add the system that you are
currently working on to the IBM Storage Insights setup manually, as shown in Figure 3-20.

Figure 3-20 System Setup: IBM Storage Insights

For more information about IBM Storage Insights, see Chapter 13, “Reliability, availability,
and serviceability, monitoring and logging, and troubleshooting” on page 769.
11. After you click Next, if you enabled at least one Call Home method, the Support
Assistance configuration window opens, as shown in Figure 3-21 on page 119. The
Support Assistance function requires Call Home, so if it is disabled, Support Assistance
cannot be used.

Figure 3-21 System Setup: Support Assistance

With the Support Assistance feature, you allow IBM Support to perform maintenance
tasks on your system while an IBM SSR is onsite. The IBM SSR can log in locally with
your permission and a special user ID and password so that a superuser password does
not need to be shared with the IBM SSR.
You can also enable Support Assistance with remote support to allow IBM Support
personnel to log in remotely to the machine with your permission through a secure tunnel
over the internet.
For more information about the Support Assistance feature, see Chapter 13, “Reliability,
availability, and serviceability, monitoring and logging, and troubleshooting” on page 769.
If you allow remote support, you are given the IP addresses and ports of the remote
support centers and an opportunity to provide proxy server details (if required) to allow the
connectivity, as shown in Figure 3-22. Also, you can allow remote connectivity at any time
or only after obtaining permission from the storage administrator.

Figure 3-22 System Setup: Support Centers

12. In the last initial system setup step, you are prompted to perform automatic configuration
for the system that you will use as FC-attached back-end storage for IBM SAN Volume
Controller.



If you plan to use the system in stand-alone mode (not behind an SAN Volume Controller),
leave Automatic Configuration turned off, as shown in Figure 3-23. If your solution
design later changes and the system becomes an SAN Volume Controller back end, you
can run automatic configuration later by using the GUI.
If you turn on automatic configuration, after the system setup completes, the system
redirects you to the Automatic Configuration for Virtualization wizard, which is described in
3.4.6, “Automatic configuration for IBM SAN Volume Controller back-end storage” on
page 133.

Figure 3-23 System Setup: Automatic configuration for IBM SAN Volume Controller

13. On the Summary page, the settings that were configured by the system setup wizard are shown. If corrections are needed, you may return to a previous step by clicking Back. Otherwise, click Finish to be redirected to the system GUI.

After the wizard completes, your system consists only of the control enclosure that includes
the node canister that you used to initialize the system and its partner, and the expansion
enclosures that are attached to them. If you have other control and expansion enclosures, you
must add them to complete the system setup. For more information about how to add a
control or expansion enclosure, see 3.4.2, “Adding an enclosure” on page 124.

If you have no more enclosures to add to this system, the system setup process is complete.
All the mandatory steps of the initial configuration are done. If required, you can configure
other global functions, such as system topology, user authentication, or local port masking,
before configuring the volumes and provisioning them to hosts.

3.4 Base configuration
Tasks that are listed in this section are used to define global system configuration settings.
Often, they are performed during system setup. However, they also can be performed any
time later, such as when the system is expanded or the system environment is reconfigured.

3.4.1 Configuring Remote Direct Memory Access clustering


Up to four control enclosures may be joined in an IBM HyperSwap or a standard topology
cluster. This subsection describes the configuration steps that must be performed if a system
is designed for IP-based Remote Direct Memory Access (RDMA) node-to-node traffic. For FC
SAN clustering, no special configuration is required on the system, but the SAN must be set
up as described in Chapter 2, “Planning” on page 77.

Prerequisites
Before RDMA clustering is configured, ensure that the following prerequisites are met:
򐂰 25 gigabits per second (Gbps) RDMA-capable Ethernet cards are installed in each node.
򐂰 RDMA-capable adapters in all nodes use the same technology, such as RDMA over
Converged Ethernet (RoCE) or internet Wide Area RDMA Protocol (iWARP).
򐂰 RDMA-capable adapters are installed in the same slots across all the nodes of the
system.
򐂰 Ethernet cables between each node are connected correctly.
򐂰 The network configuration does not contain more than two hops in the fabric of switches.
The router must not be placed between nodes that use RDMA-capable Ethernet ports for
node-to-node communication.
򐂰 The negotiated speeds on the local and remote adapters are the same.
򐂰 The local and remote port virtual local area network (VLAN) identifiers are the same. All
the ports that are used for node-to-node communication must be assigned to one VLAN
ID, and ports that are used for host attachment must have a different VLAN ID. If you plan
to use VLAN to create this separation, you must configure VLAN support on the all the
Ethernet switches in your network before you define the RDMA-capable Ethernet ports on
nodes in the system. On each switch in your network, set the VLAN to Trunk mode and
specify the VLAN ID for the RDMA-ports that will be in the same VLAN.
򐂰 A minimum of two dedicated RDMA-capable Ethernet ports are required for node-to-node
communications to ensure best performance and reliability. These ports must be
configured for inter-node traffic only and must not be used for host attachment,
virtualization of Ethernet-attached external storage, or IP replication traffic.
򐂰 A maximum of four RDMA-capable Ethernet ports per node are allowed for node-to-node
communications.



Configuration process
To enable RDMA clustering, IP addresses must be configured on each port of each node that
is used for node-to-node communication. Complete the following steps:
1. Connect to a Service Assistant of a node by going to
[Link] and clicking Change Node IP, as shown in
Figure 3-24.

Figure 3-24 Node IP address setup for Remote Direct Memory Access clustering

Figure 3-24 shows that ports 1 - 4 do not show any RDMA type, so they cannot be used
for node-to-node traffic. Ports 5 and 6 show RDMA type RoCE, so they can be used.
2. Hover your cursor over a tile with a port and click Modify to set the IP address, netmask,
gateway address, and VLAN ID for a port. The IP address for each port must be unique
and cannot be used anywhere else on the system. The VLAN ID for ports that are used for
node-to-node traffic must be the same on all nodes. When the required information is
entered, click Save and verify that the operation completed successfully, as shown in
Figure 3-25. Repeat this step for all ports that you intend to use for node-to-node traffic,
with a minimum of two and a maximum of four ports per node.

Figure 3-25 Node IP addresses configured

To list and change the node IP configuration by using the CLI, use sainfo lsnodeip and
satask chnodeip commands, as shown in Example 3-1.

Example 3-1 Setting the IP addresses for node-to-node connectivity


IBM_IBM FlashSystem:ITSO-FS9100:superuser>sainfo lsnodeip
port_id rdma_type port_speed vlan link_state state node_IP_address
1 10Gb/s active unconfigured
2 10Gb/s active unconfigured
3 10Gb/s active unconfigured
4 10Gb/s active unconfigured
5 RoCE 25Gb/s active unconfigured
6 RoCE 25Gb/s active unconfigured
IBM_IBM FlashSystem:ITSO-FS9100:superuser>satask chnodeip -ip [Link] -gw [Link] -mask
[Link] -port_id 5
IBM_IBM FlashSystem:ITSO-FS9100:superuser>satask chnodeip -ip [Link] -gw [Link]
-mask [Link] -port_id 6
IBM_IBM FlashSystem:ITSO-FS9100:superuser>sainfo lsnodeip
port_id rdma_type port_speed vlan link_state state node_IP_address
1 10Gb/s active unconfigured
2 10Gb/s active unconfigured
3 10Gb/s active unconfigured
4 10Gb/s active unconfigured
5 RoCE 25Gb/s active configured [Link]
6 RoCE 25Gb/s active configured [Link]

3. Some environments might not include a stretched layer 2 subnet. In such scenarios, a layer 3 network is used, for example, in standard topologies or in long-distance RDMA node-to-node HyperSwap configurations. To support a layer 3 Ethernet network, use the unicast discovery method for RDMA node-to-node communication. This method relies on unicast-based fabric discovery rather than multicast discovery.
To configure unicast discovery, see the information about the satask
addnodediscoverysubnet, satask rmnodediscoverysubnet, or sainfo
lsnodediscoverysubnet commands in IBM Knowledge Center. You can also configure
discovery subnets by using the Service Assistant interface menu option Change Node
Discovery Subnet, as shown in Figure 3-26.

Figure 3-26 Setting the node discovery subnet



4. After the IP addresses are configured on all nodes in a system, run the sainfo
lsnodeipconnectivity command or use the Service Assistant GUI menu Ethernet
Connectivity to verify that the partner nodes are visible on the IP network, as shown in
Figure 3-27. If necessary, troubleshoot connection problems by running the ping and
sainfo traceroute commands.

Figure 3-27 Node-to-node Ethernet connectivity

When all the nodes that are joined to the cluster are connected, the enclosure may be added
to the cluster.

3.4.2 Adding an enclosure


This procedure is the same whether you are configuring the system for the first time or
expanding it later. When performed by using the system GUI, the same steps are used for
adding expansion or control enclosures.

Before beginning this process, ensure that the new control enclosure is correctly installed and
cabled to the existing system. For FC node-to-node communication, verify that the correct
SAN zoning is set. For node-to-node communication over RDMA-capable Ethernet ports,
ensure that the IP addresses are configured and a connection between nodes can be
established.

To add an enclosure to the system, complete the following steps:


1. In the GUI, select Monitoring → System. When a new enclosure is detected by a system,
the Add Enclosure button appears on the System - Overview window next to System
Actions, as shown in Figure 3-28.

Figure 3-28 Add Enclosure button

Note: If the Add Enclosure button does not appear, review the installation instructions
to verify that the new enclosure is connected and set up correctly.

2. Click Add Enclosure, and a list of available candidate enclosures opens, as shown in
Figure 3-29. To light the Identify light-emitting diode (LED) on a selected enclosure, select
Actions → Identify. When the required enclosure (or enclosures) is chosen, click Next.

Figure 3-29 Selecting the control enclosure to add

3. Review the summary in the next window and click Finish to add the expansion enclosure
or the control enclosure and all expansions that are attached to it to the system.

Note: When a new control enclosure is added, the software version running on its
nodes is upgraded or downgraded to match the system software version. This process
can take 30 minutes or more, and the enclosure is added only when this process
completes.

4. After the control enclosure is successfully added to the system, a success message
appears. Click Close to return to the System Overview window and check that the new
enclosure is visible and available for management.

To perform the same procedure by using a CLI, complete the following steps. For more
information about the detailed syntax for each command, go to IBM Knowledge Center.
1. When adding control enclosures, check for unpopulated I/O groups by running lsiogrp.
Each control enclosure has two nodes, so it forms an I/O group. Example 3-2 shows that
only io_grp0 has nodes, so a new control enclosure can be added to io_grp1.

Example 3-2 Listing the I/O groups


IBM_IBM FlashSystem:ITSO-FS9100:superuser>lsiogrp
id name node_count vdisk_count host_count site_id site_name
0 io_grp0 2 0 0
1 io_grp1 0 0 0
2 io_grp2 0 0 0
3 io_grp3 0 0 0
4 recovery_io_grp 0 0 0



2. To list control enclosures that are available to add, run the lscontrolenclosurecandidate
command, as shown in Example 3-3. To list the expansion enclosures, run the
lsenclosure command. Expansions that have the managed parameter set to no are
available for addition.

Example 3-3 Listing the candidate control enclosures


IBM_IBM FlashSystem:ITSO-FS9100:superuser>lscontrolenclosurecandidate
serial_number product_MTM machine_signature
78E005D 9848-AF8 4AD2-EA69-8B5E-D0C0

3. Add a control enclosure by running the addcontrolenclosure command, as shown in Example 3-4. The command only triggers the process, which runs in the background and can take 30 minutes or more to complete.

Example 3-4 Adding a control enclosure


IBM_IBM FlashSystem:ITSO-FS9100:superuser>addcontrolenclosure -iogrp 1 -sernum
78E005D

4. To add an expansion enclosure, change its status to managed = yes by running the
chenclosure command, as shown in Example 3-5.

Example 3-5 Adding an expansion enclosure


IBM_IBM FlashSystem:ITSO-FS9100:superuser>lsenclosure
id status type managed IO_group_id IO_group_name product_MTM serial_number
1 online control yes 0 io_grp0 9848-AF8 78E006A
2 online expansion no 0 io_grp0 9848-AFF 78CBVF5
IBM_IBM FlashSystem:ITSO-FS9100:superuser>chenclosure -managed yes 2

3.4.3 Changing the system topology to HyperSwap


The HyperSwap function is a high availability (HA) feature that provides dual-site,
active-active access to a volume. You can create a HyperSwap topology system
configuration where each I/O group in the system is physically on a different site. When these
configurations are used with HyperSwap volumes, they can be used to maintain access to
data on the system if site-wide outages occur.

If your solution is designed to use the HyperSwap function, use the guidance in this section to
configure a cluster for a multi-site HyperSwap topology.

For a list of requirements for a HyperSwap configuration with FC or RDMA-capable Ethernet connections, see IBM Knowledge Center and expand Configuring → Configuration details → HyperSwap system configuration details.

To change the system topology to HyperSwap, complete the following steps:


1. In the GUI, click Monitoring → System to open the System - Overview window. Click
System Actions and expand Modify System Topology, as shown in Figure 3-30.

Figure 3-30 Starting the Modify System Topology wizard

2. The Modify Topology wizard welcome window opens. Click Next. You are prompted to
change the default site names, as shown in Figure 3-31. The site names can indicate, for
example, building locations for each site, or other descriptive information.

Figure 3-31 Assigning site names

3. Assign I/O groups to sites. Click the marked icons in the center of the window to swap site
assignments, as shown in Figure 3-32. Click Next.

Figure 3-32 Specifying the system topology

4. If any host objects or back-end storage controllers are configured, you must assign a site
for each of them. Right-click the object and click Modify Site, as shown in Figure 3-33.

Figure 3-33 Assigning hosts to sites



5. Set the maximum background copy operations bandwidth between the sites. Background
copy is the initial synchronization and any subsequent resynchronization traffic for
HyperSwap volumes. Use this setting to limit the impact of volume synchronization to host
operations. You may also set it higher during the initial setup (when there are no host
operations on the volumes yet), and set it lower when the system is in production.
As shown in Figure 3-34, you must specify the total bandwidth between the sites in
megabits per second (Mbps) and what percentage of this bandwidth that can be used for
background copying. Click Next.

Figure 3-34 Setting the bandwidth between the sites

6. Review the summary and click Finish. The wizard starts implementing changes to migrate
the system to the HyperSwap solution.

When you later add a host or back-end storage controller objects, the GUI prompts you to set
an object site during the creation process.
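
The topology change can also be scripted. The following hedged CLI sketch uses placeholder site, node, host, and controller names; the per-object site commands (chsite, chnodecanister -site, chhost -site, and chcontroller -site) are assumed from SAN Volume Controller and Storwize HyperSwap documentation, so verify each one against the CLI reference for your code level:

IBM_IBM FlashSystem:ITSO-FS9100:superuser>chsite -name DC_A 1
IBM_IBM FlashSystem:ITSO-FS9100:superuser>chsite -name DC_B 2
IBM_IBM FlashSystem:ITSO-FS9100:superuser>chnodecanister -site DC_A node1
IBM_IBM FlashSystem:ITSO-FS9100:superuser>chnodecanister -site DC_B node3
IBM_IBM FlashSystem:ITSO-FS9100:superuser>chhost -site DC_A host0
IBM_IBM FlashSystem:ITSO-FS9100:superuser>chcontroller -site DC_B controller0
IBM_IBM FlashSystem:ITSO-FS9100:superuser>chsystem -topology hyperswap

The chsystem -topology hyperswap command comes last because the site assignments must be in place before the topology change.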

3.4.4 Configuring quorum disks or applications


Quorum devices are required for a system to hold a copy of important system configuration
data. An internal drive of an IBM FlashCore Module (FCM), a managed disk (MDisk) from
FC-attached external back-end storage, or a special application that is connected over an IP
network may work as a quorum device.

One of these items is selected for the active quorum role, which is used to resolve failure
scenarios where half the nodes on the system become unavailable or a link between
enclosures is disrupted. The active quorum determines which nodes can continue processing
host operations and to avoid a “split brain” condition, which happens when both halves of the
system continue I/O processing independently of each other.

For systems with a single control enclosure, quorum devices are selected automatically. No
special configuration actions are required. This function also applies for systems with multiple
control enclosures, a standard topology, and virtualizing external storage.

For HyperSwap topology systems, an active quorum device must be on a third, independent
site. Due to the costs that are associated with deploying a separate FC-attached storage
device on a third site, an IP-based quorum device may be used for this purpose.

On a standard topology system with two or more control enclosures and no external storage,
an active quorum device cannot be on an internal drive of an FCM. For such configurations, it
is a best practice to deploy an IP-based quorum application.

Creating and installing an IP quorum application
To create and install an IP quorum application, complete the following steps:
1. Select System → Settings → IP Quorum to download the IP quorum application, as
shown in Figure 3-35. If you are using IPv6 for management IP addresses, the Download
IPv6 Application button is available and the IPv4 option is disabled.

Figure 3-35 Download IPv4 quorum button

2. After you click Download..., a window opens, as shown in Figure 3-36. It provides an
option to create an IP application that is used for tie-breaking only, or an application that
can be used as a tie-breaker and to store recovery metadata.

Figure 3-36 Download IP quorum application window

An application that does not store recovery metadata requires less channel bandwidth for
a link between the system and the quorum app, which may be a decision-making factor for
using a multi-site HyperSwap system.
For a full list of IP quorum app requirements, see IBM Knowledge Center and expand
Configuring → Configuration details → Configuring quorum → IP quorum
application configuration.



3. After you click OK, the ip_quorum.jar file is created. Save the file and transfer it to a
supported AIX, Linux, or Windows host that can establish an IP connection to the service
IP address of each system node. Move it to a separate directory and start the app, as
shown in Example 3-6.

Example 3-6 Starting the IP quorum application on the Windows operating system
C:\IPQuorum>java -jar ip_quorum.jar
=== IP quorum ===
Name set to null.
Successfully parsed the configuration, found 4 nodes.
....

Note: Add the IP quorum application to the list of auto-started applications on the host so that it starts after each host restart, or configure your operating system (OS) to run it as an auto-started service in the background (see the hedged sketch after these steps).

The IP quorum log file and recovery metadata are stored in the same directory with the
ip_quorum.jar file.
4. Check that the IP quorum application is successfully connected and running by verifying
its Online status by selecting System → Settings → IP Quorum, as shown in
Figure 3-37.

Figure 3-37 IP quorum application that is deployed and connected
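
One hedged way to satisfy the auto-start recommendation on a Linux host is to run the application as a systemd service. The following unit file is a sketch only; the /opt/ipquorum path, the ipquorum user, and the Java binary location are placeholders for your environment:

[Unit]
Description=IP quorum application
After=network-online.target
Wants=network-online.target

[Service]
# Run from the directory that holds ip_quorum.jar so that the log file
# and recovery metadata are written next to the application.
WorkingDirectory=/opt/ipquorum
ExecStart=/usr/bin/java -jar /opt/ipquorum/ip_quorum.jar
Restart=always
User=ipquorum

[Install]
WantedBy=multi-user.target

After you enable the unit, for example with systemctl enable --now ip-quorum.service, the application starts at boot and is restarted automatically if the process fails.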

Configuring the IP quorum mode


On a standard topology system, only the Standard quorum mode is supported. No additional
configuration is required. On a HyperSwap topology, you may configure different tie-breaker
scenarios (a tie occurs when exactly half of the nodes that were previously a member of the
system are present):
򐂰 If the quorum mode is set to Standard, both sites have an equal chance to continue
working after the tie-breaker.
򐂰 If the quorum mode is set Preferred, during a disruption, the system delays processing
tie-breaker operations on non-preferred sites, leaving more time for the preferred site to
win. If during an extended period a preferred site cannot contact the IP quorum app (for
example, if it is destroyed), a non-preferred site continues working.
򐂰 If the quorum mode is set to Winner, the selected site always is the tie-breaker winner. If
the winner site is destroyed, the remaining site may continue operating only after manual
intervention.

The Preferred and Winner quorum modes are supported only with an IP quorum. For a
FC-attached active quorum MDisk, only Standard mode is possible.

To set a quorum mode, select System → Settings → IP Quorum and click Quorum Setting.
The Quorum Setting window opens, as shown in Figure 3-38.

Figure 3-38 Changing the quorum mode

3.4.5 Configuring the local Fibre Channel port masking


With FC port masking, you control the usage of FC ports. By applying a mask, you restrict
node-to-node communication or FC RC traffic on selected ports.

To decide whether your system must have port masks configured, see 2.6.8, “Port
designation recommendations” on page 85.

To set the FC port mask by using the GUI, complete the following steps:
1. Select System → Network → Fibre Channel Ports. In a displayed list of FC ports, the
ports are grouped by a system port ID. Each port is configured identically across all nodes
in the system. You can click the arrow next to the port ID to expand a list and see which
node ports (N_Port) belong to the selected system port ID and their worldwide port names
(WWPNs).
2. Right-click a system port ID that you want to change and select Modify Connection, as
shown in Figure 3-39.

Figure 3-39 Applying a port mask by using a GUI

By default, all system ports can send and receive traffic of any kind:
򐂰 Host traffic
򐂰 Traffic to virtualized back-end storage systems
򐂰 Local system traffic (node-to-node)
򐂰 Partner system (remote replication) traffic



The first two types are always allowed, and you may control them only with SAN zoning. The
other two types can be blocked by port masking. In the Modify Connection dialog box, as
shown in Figure 3-40, you can choose which types of traffic a port can send.

Figure 3-40 Modify Connection dialog box

Any        A port can carry all four types of traffic.
Local      Remote replication traffic is blocked on this port.
Remote     Local node-to-node traffic is blocked on this port.
None       Both local node-to-node and remote replication traffic are blocked, but
           system-to-host and system-to-back-end storage communication still works.

Port masks can also be set by using the CLI. Local and remote partner port masks are
internally represented as a string of zeros and ones. The last digit in the string represents port
one. The previous digits represent ports two, three, and so on. If the digit for a port is set to
“1”, the port is enabled for the specific type of communication. If it is set to “0”, the system
does not send or receive traffic that is controlled by a mask on the port.

To view the current port mask settings, run the lssystem command, as shown in Example 3-7.
The output shows that all system ports allow all kinds of traffic.

Example 3-7 Viewing the local port mask


IBM_IBM FlashSystem:ITSO-FS9100:superuser>lssystem |grep mask
local_fc_port_mask 1111111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask 1111111111111111111111111111111111111111111111111111111111111111

To set the local or remote port mask, run the chsystem command. Example 3-8 shows the
mask setting for a system with four FC ports on each node and that has RC relationships.
Masks are applied to allow local node-to-node traffic on ports 1 and 2, and replication traffic
on ports 3 and 4.

Example 3-8 Setting a local port mask by running the chsystem command
IBM_IBM FlashSystem:ITSO-FS9100:superuser>chsystem -localfcportmask 0011
IBM_IBM FlashSystem:ITSO-FS9100:superuser>chsystem -partnerfcportmask 1100
IBM_IBM FlashSystem:ITSO-FS9100:superuser>lssystem |grep mask
local_fc_port_mask 0000000000000000000000000000000000000000000000000000000000000011
partner_fc_port_mask 0000000000000000000000000000000000000000000000000000000000001100

The mask is extended with zeros, and all ports that are not explicitly set in a mask have the
selected type of traffic blocked.

Note: When replacing or upgrading your node hardware, consider that the number of FC
ports and their arrangement might be changed. If so, make sure that any configured port
masks are still valid for the new configuration.

3.4.6 Automatic configuration for IBM SAN Volume Controller back-end


storage
If a system is supposed to work as FC-attached back-end storage for SAN Volume Controller,
you can enable Automatic Configuration for Virtualization during the initial system setup or
any time later by selecting Settings → System → Automatic Configuration.

Automatic Configuration for Virtualization is intended for a new system. If there are host, pool,
or volume objects that are configured, all the user data must be migrated out of the system,
and those objects must be deleted.

The Automatic Configuration for Virtualization wizard starts immediately after you complete
the initial setup wizard if you set Automatic Configuration to On. The following steps are
performed by it:
1. Add control or expansion enclosures to the system that are not added yet. Click Add
Enclosure to start the adding process, or click Skip to move to the next step. You can turn
off the Automatic Configuration for Virtualization wizard at any step by clicking the ...
(hamburger) symbol in the upper right corner, as shown in Figure 3-41.

Figure 3-41 Automatic Configuration wizard: Add Enclosure

2. The wizard checks whether the SAN Volume Controller is correctly zoned to the system.
By default, newly installed systems run in N_Port ID Virtualization (NPIV) mode (Target
Port Mode). The system’s virtual (host) WWPNs must be zoned for SAN Volume
Controller. On the SAN Volume Controller side, physical WWPNs must be zoned to a
back-end system independently of the NPIV mode setting.
3. Create a host cluster object for SAN Volume Controller. Each SAN Volume Controller node
has its own worldwide node name (WWNN). Make sure to select all WWNNs that belong
to nodes of the same SAN Volume Controller cluster.



Figure 3-42 shows that the system detects a SAN Volume Controller cluster with a single
I/O group, so two WWNNs are selected.

Figure 3-42 Defining a host cluster

4. When all nodes of the SAN Volume Controller cluster, including any spare node, are
selected, you can change the host object name for each one, as shown in Figure 3-43. For
convenience, name the host objects to match the SAN Volume Controller node names or
serial numbers.

Figure 3-43 Hosts inside an IBM SAN Volume Controller host cluster

5. Click Automatic Configuration and check the list of internal resources that are used.
Click Cancel if the list is not correct; otherwise, click Next.

6. If the system uses compressed drives (FCM drives), you are prompted to enter your
expected compression ratio (or total capacity that will be provisioned to SAN Volume
Controller), as shown in Figure 3-44. If SAN Volume Controller is using encryption or
writes data that is not compressible, set the ratio to 1:1.

Figure 3-44 Automatic pool configuration

7. Review the pool (or pools) configuration, as shown in Figure 3-45, and click Proceed to
trigger commands that will apply it.

Figure 3-45 Pools configuration



8. When the Automatic Configuration for Virtualization wizard completes, you see the
window that is shown in Figure 3-46. After clicking Close, you can proceed to the SAN
Volume Controller GUI and configure the newly provisioned storage.
You can export the system volume configuration data in .csv format by using this window
or any time later by selecting Settings → System → Automatic Configuration.

Figure 3-46 Automatic configuration complete

3.5 Configuring management access


The system can be managed by using the GUI and the CLI. Access to the system
management interfaces requires user authentication. This section describes user
authentication and the steps to implement secure communications.

3.5.1 Configuring secure communications


During system initialization, a self-signed Secure Sockets Layer (SSL) certificate is
automatically generated by the system to encrypt communications between the browser and
the system. Self-signed certificates generate web browser security warnings and might not
comply with organizational security guidelines.

Signed SSL certificates are issued by a trusted certificate authority (CA). A browser maintains
a list of trusted CAs that are identified by their root certificates. For a signed certificate to be
trusted, the CA's root certificate must be included in this list.

To see the details of your system certificate, select Settings → Security and click Secure
Communications, as shown in Figure 3-47, or run the lssystemcert command.

Figure 3-47 Accessing the Secure Communications window

Based on the security requirements for your system, you can create either a new self-signed
certificate or install a signed certificate that is created by a third-party CA.

Generating a self-signed certificate


If a self-signed certificate has expired or its key type does not comply with your company's
security policy, you can regenerate it. To renew a self-signed certificate, complete the
following steps:
1. Select Update Certificate on the Secure Communications window, as shown in
Figure 3-47.
2. Select Self-signed certificate and enter the details for the new certificate. The “Key type”
and “Validity days” are the only mandatory fields.

Note: Before re-creating a self-signed certificate, ensure that your browser supports
the type of key that you are going to use for the certificate. Check your organization's
security policy to determine which key type is required.

3. Click Update.

You are prompted to confirm the action. Click Yes to proceed. Close the browser, wait
approximately 2 minutes, and reconnect to the management GUI.

To regenerate an SSL certificate by using a CLI, run the chsystemcert command, as shown in
Example 3-9. Valid values for -keytype are rsa2048, ecdsa384, or ecdsa521.

Example 3-9 Regenerating a self-signed certificate


IBM_IBM FlashSystem:ITSO-FS9100:superuser>chsystemcert -mkselfsigned -keytype
ecdsa521 -validity 365



Configuring a signed certificate
If your company's security policy requires certificates to be signed by a trusted authority,
complete the following steps to configure a signed certificate:
1. Select Update Certificate in the Secure Communications window.
2. Select Signed certificate and enter the details for the new certificate signing request, as
shown in Figure 3-48. All fields are mandatory except for the Subject Alternative Name.
For the “Country” field, use a two-letter country code. Click Generate Request.

Figure 3-48 Generating a certificate request

3. When prompted, save the generated file that contains the certificate signing
request.
Until the signed certificate is installed, the Secure Communications window shows that an
outstanding certificate request exists.

Attention: If you must update a field in the certificate request, generate a new request
and submit it for signing by the proper CA. However, this process invalidates the
previous certificate request and prevents the installation of the signed certificate that is
associated with the original request.

4. Submit the request to the CA to receive a signed certificate. Notify the CA that you need a
certificate (or certificate chain) in base64-encoded Privacy Enhanced Mail (PEM) format.
5. When you receive the signed certificate, select Update Certificate in the Secure
Communications window again.
6. Select Signed Certificate and click the folder icon next to the Signed Certificate input
field of the Update Certificate window, as shown in Figure 3-48 on page 138. Click
Update.
7. You are prompted to confirm the action. Click Yes to proceed. After your certificate is
installed, the GUI session disconnects. Close the browser window and wait approximately
2 minutes before reconnecting to the management GUI.
8. Reconnect to the GUI and select Settings → Security → Secure Communications. The
window that opens should show that you are using a signed certificate, as shown in
Figure 3-49.

Figure 3-49 Signed certificate installed
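If you are unsure whether the file that the CA returned is in the required PEM format, you can inspect it on a workstation with OpenSSL before uploading it in step 6. A minimal sketch, assuming a hypothetical file name of signed_cert.pem:

openssl x509 -in signed_cert.pem -noout -subject -issuer -dates

If the command prints the subject, issuer, and validity dates without an error, the file is a PEM-encoded certificate.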

3.5.2 Configuring user authentication


There are two methods of user authentication to control access to the GUI and to the CLI:
򐂰 Local user authentication is performed within the system. GUI users authenticate with
user name and password. CLI users must provide a user name and a Secure Shell (SSH)
public key or a password.
򐂰 Remote user authentication allows users to authenticate to the system by using
credentials that are stored on an external authentication service. With this feature, you use
user credentials and user groups that are defined on the remote service to simplify user
management and access, enforce password policies more efficiently, and separate user
management from storage management.

Locally administered users can coexist with remote authentication.

User roles and groups


User groups are used to determine what tasks the user is authorized to perform. Each user
group is associated with a single role. Roles apply to both local and remote users on the
system and are based on the user group to which the user belongs. A local user can belong
only to a single group, so the role of a local user is defined by the single group to which that
user belongs.

For a list of user roles and their tasks, and a description of a pre-configured user group, see
IBM Knowledge Center and expand Product overview → Technical overview → User
roles.



Superuser account
Every system has a default user that is called the superuser. It cannot be deleted or modified,
except for changing the password and SSH key. The superuser is a local user and cannot be
authenticated remotely. The superuser has a SecurityAdmin user role, which has the most
privileges within the system.

Note: The superuser is the only user that may log in to the Service Assistant interface. It is
also the only user that may run sainfo and satask commands through the CLI.

The password for superuser is set during the system setup. The superuser password can be
reset to its default value of passw0rd by using a procedure that is described in IBM Knowledge
Center by expanding Troubleshooting → Resolving a problem → Procedure: Resetting
the superuser password.

Note: The superuser password reset procedure uses system internal USB ports. The
system may be configured to disable those ports. If the USB ports are disabled and there
are no users with the SecurityAdmin role and a known password, the superuser password
cannot be reset without replacing the system hardware and deleting the system
configuration.

Local authentication
A local user is a user whose account is managed entirely on the system. A local user belongs
to one user group only, and it must have a password, an SSH public key, or both. Each user
has a name, which must be unique across all users in one system.

User names can contain up to 256 printable American Standard Code for Information
Interchange (ASCII) characters. Forbidden characters are the single quotation mark ('), colon
(:), percent symbol (%), asterisk (*), comma (,), and double quotation marks (“). A user name
cannot begin or end with a blank space.

Passwords for local users can be up to 64 printable ASCII characters, but cannot begin or end
with a space.

When connecting to the CLI, SSH key authentication is attempted first, with the user name
and password combination available as a fallback. The SSH key authentication method
is available for CLI and file transfer access only. For GUI access, only the password is used.

To add a user that is authenticated without a password by using only an SSH key, select
Access → Users by Group, click Add user, and then click Browse to select the SSH public
key for that user, as shown in Figure 3-50 on page 141. The Password field may be left blank.
The system accepts public keys that are generated by PuTTY (SSH2), OpenSSH, and RFC
4716-compliant keys that are generated by other clients.

Figure 3-50 Creating a user that is authenticated by an SSH key
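The same task can be performed from the CLI. A minimal sketch, assuming that the public key file was first copied to the system (for example, to /tmp/id_rsa.pub by using scp) and using a hypothetical user name with an existing user group; check the mkuser command in the CLI reference for your code level for the exact parameters:

IBM_IBM FlashSystem:ITSO-FS9100:superuser>mkuser -name jdoe -usergrp Monitor -keyfile /tmp/id_rsa.pub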

If local authentication is used, user accounts must be created for each system. If you want
access for a user on multiple systems, you must define the user in each system.

Remote authentication
A remote user is authenticated by using identity information that is accessible by using the
Lightweight Directory Access Protocol (LDAP). The LDAP server must be available for the
users to log in to the system. Remote users have their groups defined by the remote
authentication service.

Users that are authenticated by an LDAP server can log in to the management GUI and the
CLI. These users do not need to be configured locally for CLI access, and they do not need
an SSH key that is configured to log in by using the CLI.

If multiple LDAP servers are available, you can configure more than one LDAP server to
improve resiliency. Authentication requests are processed by those LDAP servers that are
marked as preferred unless the connection fails or a user is not found. Requests are
distributed across all preferred servers for load balancing in a round-robin fashion.

Note: All LDAP servers that are configured within the same system must be of the same
type.



If users that are part of a group on the LDAP server are to be authenticated remotely, a user
group with an identical name must exist on the system. The user group name is
case-sensitive. The user group must also be enabled for remote authentication on the system.

A user who is authenticated remotely is granted permissions according to the role that is
assigned to the user group of which the user is a member.

To configure remote authentication by using LDAP, start by enabling remote authentication by
completing the following steps:
1. Select Settings → Security, click Remote Authentication, and then click Configure
Remote Authentication, as shown in Figure 3-51.

Figure 3-51 Configuring remote authentication

2. Enter the LDAP settings. These settings are not server-specific. They are applied to all
LDAP servers that are configured in the system. Extra optional settings are available by
clicking Advanced Settings, as shown in Figure 3-52.

Figure 3-52 Configure Remote Authentication settings

The following settings are available:
– LDAP type:
• IBM Security Directory Server (for IBM Security Directory Server).
• Microsoft Active Directory (AD).
• Other (other LDAP v3-capable directory servers, for example, OpenLDAP).
– Security:
• LDAP with StartTLS: Select this option to use the StartTLS extension (RFC 2830).
It works by establishing a non-encrypted connection with an LDAP server on a
standard LDAP port (389), and then performing a TLS handshake over an existing
connection.
• LDAPS: Select to use LDAP over SSL and establish secure connections by using
port 636.
• None: Select to transport data in clear text format without encryption.
– Service Credentials: Sets a user name and password for administrative binding (the
credentials of a user that has the authority to query the LDAP directory). Leave it empty
if your LDAP server is configured to support anonymous bind.
For AD, a user name must be in User Principal Name (UPN) format.
– Advanced settings:
Speak to the administrator of the LDAP server to ensure that these fields are
completed correctly:
• User Attribute
This LDAP attribute is used to determine the user name of remote users. The
attribute must exist in your LDAP schema and must be unique for each of your
users.
This advanced setting defaults to sAMAccountName for AD and to uid for IBM
Security Directory Server and Other.
• Group Attribute
This LDAP attribute is used to determine the user group memberships of remote
users. The attribute must contain either the distinguished name of a group or a
colon-separated list of group names.
This advanced setting defaults to memberOf for AD and Other, and to ibm-allGroups
for IBM Security Directory Server. For Other LDAP type implementations, you
might need to configure the memberOf overlay if it is not in place.
• Audit Log Attribute
This LDAP attribute is used to determine the identity of remote users. When an
LDAP user performs an audited action, this identity is recorded in the
audit log. This advanced setting defaults to userPrincipalName for AD and to uid for
IBM Security Directory Server and the Other type.



3. Enter the server settings for one or more LDAP servers, as shown in Figure 3-53.

Figure 3-53 Configure Remote Authentication: Creating an LDAP server

The following settings are available:


– Preferred
One or more configured LDAP servers can be marked as Preferred. Requests are
distributed among these servers, and non-preferred servers are used only if all the
preferred servers fail.
– IP Address
The IP address of the server.
– Base DN
The distinguished name to use as a starting point for searching for users on the server
(for example, dc=itso,dc=ibm,dc=com).
– SSL Certificate
The SSL certificate that is used to securely connect to the LDAP server. This certificate
is required only if you chose to use SSL or Transport Layer Security as a security
method earlier.
Click Browse to select a server certificate. The system accepts certificates in base-64
encoded PEM format. To get a certificate in PEM format from your AD server, select
Base-64 Encoded X.509 (.CER) in the MS Windows Certificate Export wizard, as
shown in Figure 3-54 on page 145.


Figure 3-54 Exporting the AD certificate

Note: If your organization is using a tiered CA hierarchy, a server certificate that is
exported for use on a system must include all the certificates in the chain. To
accomplish this task, export the certificate in MS Windows in .P7B format and use
a third-party tool, such as OpenSSL, to convert it to PEM format. Otherwise, the exported
certificate will not contain all certificates in the certification path.
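A minimal conversion sketch with OpenSSL, assuming a hypothetical exported file name of ad_chain.p7b in the binary (DER) encoding (omit -inform DER if the file was exported in base-64 encoding):

openssl pkcs7 -print_certs -in ad_chain.p7b -inform DER -out ad_chain.pem

The resulting ad_chain.pem file contains all the certificates of the chain in PEM format and can be selected as the SSL certificate for the LDAP server.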

If you set a certificate and you want to remove it, click the red cross next to
Configured.
– Click the plus and minus signs to add or remove LDAP server records. You may define
up to six servers.
Click Finish to save the settings.
4. To verify that LDAP is enabled, select Settings → Security → Remote Authentication,
as shown in Figure 3-55. You may also test the server connection by selecting Global
Actions → Test LDAP connections and verifying that all servers return “CMMVC7075I
The LDAP task completed successfully”.

Figure 3-55 Verifying that LDAP is enabled

You can use the Global Actions menu to disable remote authentication and switch to
local authentication only.
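Equivalent checks are available from the CLI. A minimal sketch, assuming the LDAP-related commands at the 8.3.1 code level (verify the command names and parameters in the CLI reference for your system):

IBM_IBM FlashSystem:ITSO-FS9100:superuser>lsldap
IBM_IBM FlashSystem:ITSO-FS9100:superuser>lsldapserver
IBM_IBM FlashSystem:ITSO-FS9100:superuser>testldapserver -username jdoe -password passw0rd

The lsldap command shows the system-wide LDAP settings, lsldapserver lists the configured servers, and testldapserver tests server connectivity and, when credentials are supplied, user authentication. Depending on the code level, testldapserver might require a server name or ID as an argument.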



After remote authentication is enabled, the remote user groups must be configured. You can
use the default built-in user groups for remote authentication. However, the name of the
default user groups cannot be changed. If the LDAP server contains a group that you want to
use and you do not want to create this group on the storage system, the name of the group
must be changed on the server side to match the default name. Any user group, whether
default or self-defined, must be enabled for remote authentication before LDAP authentication
can be used for that group.

To create a user group with remote authentication enabled, complete the following steps:
1. Select Access → Users by Group and click Create User Group. Enter the name for the
new group, select the LDAP check box, and choose a role for the users in the group, as
shown in Figure 3-56.

Figure 3-56 Creating a user group with remote authentication enabled

To enable LDAP for one of the existing groups, select it in the list, select User Group
Actions → Properties in the upper right corner, and select the LDAP check box.
2. When you have at least one user group that is enabled for remote authentication, verify
that you set up your user group on the LDAP server correctly by checking whether the
following conditions are true:
– The name of the user group on the LDAP server matches the one that you modified or
created on the storage system.

Note: The user group name is case-sensitive.

– Each user that you want to authenticate remotely is a member of the LDAP user group
that is configured for the system role.
3. To test the user authentication, select Settings → Security → Remote Authentication,
and then select Global Actions → Test LDAP Authentication (for an example, see
Figure 3-55 on page 145). Enter the user credentials of a user that is defined on the LDAP
server and click Test. A successful test returns the message “CMMVC7075I The LDAP
task completed successfully”.

A user can log in with their short name (that is, without the domain component) or with the
fully qualified user name in the UPN format (user@domain).


Chapter 4. IBM Spectrum Virtualize GUI


This chapter describes an overview of the IBM Spectrum Virtualize GUI. The management
GUI is a tool that is enabled and provided by IBM Spectrum Virtualize that helps you to
monitor, manage, and configure your system.

This chapter explains the basic view and the configuration procedures that are required to get
your system environment running as quickly as possible by using the GUI. This chapter does
not describe advanced troubleshooting or problem determination and some of the complex
operations (compression and encryption). For more information, see Chapter 13, “Reliability,
availability, and serviceability, monitoring and logging, and troubleshooting” on page 769.

Throughout this chapter, all GUI menu items are introduced in a systematic, logical order as
they appear in the GUI. However, topics that are described more in detail in other chapters of
the book are only referred to here. For example, storage pools (Chapter 5, “Storage pools” on
page 221), volumes (Chapter 6, “Volumes” on page 277), hosts (Chapter 7, “Hosts” on
page 369), and Copy Services (Chapter 10, “Advanced Copy Services” on page 505) are
described in separate chapters.

Demonstration: The IBM Client Demonstration Center includes a demonstration of the
Version 8.2 GUI. For more information, see the IBM Client Demonstration Center (log in
required).

This chapter includes the following topics:


򐂰 4.1, “Performing operations by using the GUI” on page 148
򐂰 4.2, “GUI introduction” on page 152
򐂰 4.3, “System - Overview window” on page 158
򐂰 4.4, “Monitoring menu” on page 162
򐂰 4.5, “Pools” on page 171
򐂰 4.6, “Volumes” on page 172
򐂰 4.7, “Hosts” on page 172
򐂰 4.8, “Copy Services” on page 173
򐂰 4.9, “Access” on page 174
򐂰 4.10, “Settings” on page 185
򐂰 4.11, “Additional frequent tasks in the GUI” on page 213



4.1 Performing operations by using the GUI
This section describes useful tasks that use the GUI that help administrators to manage,
monitor, and configure the system as quickly as possible. For the example in this book, we
configure the system in a standard topology.

The GUI is a built-in software component within the IBM Spectrum Virtualize Software.
Multiple users can be logged in to the GUI. However, no locking mechanism exists, so be
aware that if two users change the same object simultaneously, the last action that is entered
from the GUI is the action that takes effect.

Important: Data entries that are made through the GUI are case-sensitive.

You must enable JavaScript in your browser. For Mozilla Firefox, JavaScript is enabled by
default and requires no other configuration steps.

4.1.1 Accessing the GUI


To access the IBM GUI, enter the Internet Protocol (IP) address that was set during the initial
setup process into your web browser. You can connect from any workstation that can
communicate with the system. The login window opens (see Figure 4-1).

Figure 4-1 Login window of the GUI

Note: If you log in to the GUI by using the configuration node, you receive another option:
Service Assistant Tool (SAT). Clicking this option takes you to the service assistant instead
of the cluster GUI, as shown in Figure 4-2 on page 149.

Figure 4-2 Login window of the storage system when it is connected to the configuration node

It is a best practice for each user to have their own unique account. The default user accounts
should be disabled, or their passwords should be changed and kept secured for emergency
purposes only. This approach helps to identify the personnel who work on the systems and to
track all important changes that they make. The superuser account should be used for initial
configuration only.



After a successful login, the Version 8.3 Welcome window opens and displays the new
system dashboard (see Figure 4-3).

Figure 4-3 Welcome page with new dashboard

The Dashboard is divided into three sections:


򐂰 Performance
This section provides important information about latency, bandwidth, input/output
operations per second (IOPS), and CPU utilization. All this information can be viewed at
the system or canister levels. A “Node comparison” view shows the differences in
characteristics of each node (see Figure 4-4). The performance graph is updated with new
data every 5 seconds.

Figure 4-4 Performance statistics

򐂰 Capacity
This section shows the current utilization of attached storage and its usage. Apart from the
usable capacity, it also shows provisioned capacity and capacity savings. You can select
the Compressed Volumes, Deduplicated Volumes, or Thin Provisioned Volumes
options to display a complete list of the options in the Volumes tab. New with Version 8.2 is
the “Overprovisioned External Storage” section (highlighted in the red box in Figure 4-5 on
page 151), which appears only when attached storage systems that are overprovisioned
are present.

Figure 4-5 Capacity overview

Selecting this option provides a list of Overprovisioned External Systems that you can
then click to see a list of related managed disks (MDisks) and pools, as shown in
Figure 4-6.


Figure 4-6 List showing overprovisioned external storage

You also see a warning when assigning MDisks to pools if the MDisk is on an
overprovisioned external storage controller.
򐂰 System Health
This section indicates the status of all critical system components, which are grouped in
three categories: Hardware, logical, and connectivity components, as shown in Figure 4-7.
When you click Expand, each component is listed as a subgroup. You can then go directly
to the section of GUI where the component that you are interested in is managed.

Figure 4-7 System Health overview window



Figure 4-8 shows the expanded view.

Figure 4-8 Expanded System health view

In Version 8.3.1, the dashboard is displayed as the welcome page instead of the system pane
that was used in previous versions. The system overview was moved to the Monitoring →
System menu.

Although the Dashboard pane provides key information about system behavior, the System
menu is a preferred starting point to obtain the necessary details about your system
components.

4.2 GUI introduction


As shown in Figure 4-9, the former GUI System pane was moved to Monitoring → System.


Figure 4-9 IBM Storage System window

4.2.1 Task menu


The IBM Spectrum Virtualize GUI task menu is always available on the left side of the GUI
window. To browse by using this menu, click a function icon and choose the task that you
want to display, as shown in Figure 4-10 on page 153.

Figure 4-10 The task menu on the left side of the GUI

By reducing the horizontal size of your browser window, the wide task menu shrinks to the
icons only.

4.2.2 Suggested tasks


After the initial configuration process is complete, IBM Spectrum Virtualize shows the
information about suggested tasks that notify the administrator that several key functions are
not yet configured. If necessary, this indicator can be closed and these tasks can be
performed at any time.



Figure 4-11 shows the suggested tasks in the System pane.

Figure 4-11 Suggested tasks

In this case, the GUI has two suggested tasks that help with the general administration of the
system. You can perform the tasks directly from this window, or cancel them and run the
procedures later. Other suggested tasks that typically appear after the initial system
configuration are to create a volume and configure a storage pool.

The dynamic IBM Spectrum Virtualize menu contains the following panes:
򐂰 Dashboard
򐂰 Monitoring
򐂰 Pools
򐂰 Volumes
򐂰 Hosts
򐂰 Copy Services
򐂰 Access
򐂰 Settings

4.2.3 Notification icons and help


Three notification icons are in the upper navigation area of the GUI (see Figure 4-12). The left
icon indicates warning and error alerts that were recorded in the event log. The middle icon
shows running jobs and suggested tasks. The rightmost icon opens a help menu with content
that is associated with the current task and the currently opened GUI menu.

Figure 4-12 Notification area

Alerts indication
The left icon in the notification area informs administrators about important alerts in the
systems. Click the icon to list warning messages in yellow and errors in red (see Figure 4-13
on page 155).

Figure 4-13 System alerts

You can go directly to the Events menu by clicking the View All Events option or see each
event message separately by clicking the Details icon of the specific message, analyzing the
content, and eventually running the suggested fix procedure, as shown in Figure 4-14.

Figure 4-14 Login excluded

Running jobs and suggested tasks


The middle icon in the notification area provides an overview of currently running tasks that
were triggered by the administrator. It also includes suggested tasks that recommend that
users perform specific configuration actions.



In the example that is shown in Figure 4-15, we have not yet defined remote support in the
system. Therefore, the system suggests that we do so and offers us direct access to the
associated Remote Support menu. Click Run Task to define remote support according to
the procedure that is explained in Chapter 13, “Reliability, availability, and serviceability,
monitoring and logging, and troubleshooting” on page 769. If you do not want to define
remote support now, click Not Now and the suggestion message disappears.

Figure 4-15 Configure Remote Support window

Similarly, you can analyze the details of running tasks, either all together in one window or
one task at a time. Click View to open the volume format job, as shown in Figure 4-16.

Figure 4-16 Details of a running task

The following information can be displayed as part of the running tasks:


򐂰 Volume migration
򐂰 MDisk removal
򐂰 Image mode migration
򐂰 Extent migration
򐂰 IBM FlashCopy
򐂰 Metro Mirror (MM) and Global Mirror (GM)
򐂰 Volume formatting
򐂰 Space-efficient copy repair
򐂰 Volume copy verification and synchronization
򐂰 Estimated time for the task completion

Making selections
Recent updates to the GUI improved the process of making selections, so you can now select
multiple items more easily. Go to the window that you want, press and hold the Shift or Ctrl
key, and make your selection.

Pressing and holding the Shift key, select the first item in your list that you want, and then
select the last item. All items between the two that you choose are also selected, as shown in
Figure 4-17.

Figure 4-17 Selecting items by using the Shift key

Pressing and holding the Ctrl key, select any items from the entire list. You can select items
that do not appear in sequential order, as shown in Figure 4-18.

Figure 4-18 Selecting items by using the Ctrl key

You can also select items by using the built-in filtering function. For more information, see
4.3.1, “Content-based organization” on page 158.

Help
If you need help, you can select the (?) button, as shown in Figure 4-19.

Figure 4-19 Access help menu



You see two options:
򐂰 The first option opens a new tab with plain text information about the pane you are on and
its contents.
򐂰 The second option shows the same information in IBM Knowledge Center. This option
requires an internet connection, but the first option does not because the information is
stored locally on the system.

For example, in the Dashboard pane, you can open help information that is related to the
dashboard-provided information, as shown in Figure 4-20.

Figure 4-20 Example of Dashboard help content

4.3 System - Overview window


The System - Overview window is shown in Figure 4-21.

Figure 4-21 The System - Overview window

The next section describes the structure of the window and how to go to various system
components to manage them more efficiently and quickly.

4.3.1 Content-based organization


The following sections describe several view options within the GUI in which you can filter (to
minimize the amount of data that is shown on the window), sort, and reorganize the content of
the window.

Table filtering
On most pages, a Filter box is available at the upper right side of the window. Use this option
if the list of object entries is too long and you want to search for something specific.

To use search filtering, complete the following steps:


1. In the Filter box that is shown in Figure 4-22, enter a search term by which you want to
filter. You can also use the drop-down menus to modify what the system searches for. For
example, if you want an exact match to your filter, select = instead of Contains. The first
drop-down list limits your filter to search through a specific column only, for example,
Name and State.

Figure 4-22 Filter search box

2. Enter the text string that you want to filter and press Enter.
By using this function, you can filter your table based on column names. In our
example, the volume list displays only volumes whose names contain cayman.
Cayman is highlighted in amber, as are any columns that contain
this information, as shown in Figure 4-23. The search option is not case-sensitive.

Figure 4-23 Showing filtered rows

3. Remove this filtered view by clicking the X icon that displays in the Filter box or by deleting
what you searched for and pressing Enter, as shown in Figure 4-24.

Figure 4-24 Removing the filtered view



Filtering: This filtering option is available in most menu options of the GUI.
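A similar filter is available in the CLI through the -filtervalue parameter that most ls commands accept. A minimal sketch that lists only volumes whose names begin with cayman (quoting of the wildcard can vary by client shell):

IBM_IBM FlashSystem:ITSO-FS9100:superuser>lsvdisk -filtervalue "name=cayman*"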

Table information
In the table view, you can add or remove the information in the tables on most pages.

For example, on the Volumes pane, complete the following steps to add a column to the table:
1. Right-click any column headers of the table or select the icon in the left corner of the table
header. A list of all of the available columns displays, as shown in Figure 4-25.

right click here

Figure 4-25 Adding or removing details in a table

2. Select the column that you want to add or remove from this table. In our example, we
added the volume ID column and sorted the content by ID, as shown on the left in
Figure 4-26.

Figure 4-26 Table with an added ID column

3. You can repeat this process several times to create custom tables to meet your
requirements.
4. Return to the default table view by selecting Restore Default View (the last entry) in the
column selection menu.

Sorting: By clicking a column, you can sort a table based on that column in ascending or
descending order.

Shifting columns in tables


You can move columns by clicking a column heading and dragging it right or left, as shown in
Figure 4-27. In this example, we move the Capacity column before the Pool
column.

Figure 4-27 Reorganizing table columns



4.4 Monitoring menu
Click the Monitoring icon in left pane to open the Monitoring menu (see Figure 4-28). The
Monitoring menu offers these navigation options:
򐂰 System: This option opens an overview of the system. It shows all control enclosures and
groups them into I/O groups if more than one control enclosure is present. Useful
information about each enclosure is displayed, including status, number of events against
each enclosure, and key enclosure information, such as ID and serial number. For more
information, see 4.4.1, “System overview” on page 163.
򐂰 Easy Tier Reports: This option gives you an overview of the Easy Tier Activities in your
system. Easy Tier eliminates manual intervention when you assign highly active data on
volumes to faster responding storage. In this dynamically tiered environment, data
movement is seamless to the host application regardless of the storage tier in which the
data belongs. However, you can manually change the default behavior. For example, you
can turn off Easy Tier on pools that have any combination of the four types of MDisks. If a
pool contains one type of MDisk, Easy Tier goes into balancing mode. When the pool
contains multiple types of MDisks, Easy Tier is automatically turned on. For more
information, see 4.4.2, “Easy Tier Reports” on page 167.
򐂰 Events: This option tracks all informational, warning, and error messages that occurred in
the system. You can apply various filters to sort the messages according to your needs or
export the messages to an external comma-separated value (CSV) file. For more
information, see 4.4.3, “Events” on page 168.
򐂰 Performance: This option reports the general system statistics that relate to the processor
(CPU) utilization, host and internal interfaces, volumes, and MDisks. With this option, you
can switch between megabytes per second (MBps) or IOPS. For more information, see
4.4.4, “Performance” on page 169.
򐂰 Background Tasks: The option shows the progress of all tasks running in the background
as listed in 4.4.5, “Background Tasks” on page 170.

Figure 4-28 Monitoring menu

4.4.1 System overview
The System option on the Monitoring menu provides a general overview. If you have more
than one control enclosure in a cluster, each enclosure has its own I/O group section (see
Figure 4-29).

Figure 4-29 System overview

This view shows all external components in real time. In Figure 4-29, you can see that the
identify light-emitting diode (LED) of one of the canisters is lit (see the red box). You can click
any component in the graphic view or in the list view at the bottom to view details. For
example, clicking a node brings up details, such as whether the node is online and which
node is the configuration node, as shown in Figure 4-30. For more information about a
component, see the Component Details section on the right side, which is shown when a
component is selected.


Figure 4-30 Showing the node canister details



By right-clicking and selecting Properties, you see detailed technical parameters, such as
the WWNN, memory, and CPU, as shown in Figure 4-31.

Figure 4-31 Canister information

In an environment with multiple IBM Storage System clusters, you can easily direct onsite
personnel or a technician to the correct device by enabling the identification LED on the front
panel. Complete the following steps:
1. Click Turn Identify On in the menu that is shown in Figure 4-32.

Figure 4-32 Turning on the Identify LED

2. Wait for confirmation from the technician that the device in the data center was correctly
identified. In the GUI, you see a flashing light, which indicates that the Identify LED was
turned on.

3. After the confirmation, click Turn LED Off (see Figure 4-33).

Figure 4-33 Turning off the Identify LED

Alternatively, you can use the command-line interface (CLI) to get the same results. Enter the
following commands in this sequence:
1. Type svctask chenclosure -identify yes 1 (or enter chenclosure -identify yes 1).
2. Type svctask chenclosure -identify no 1 (or enter chenclosure -identify no 1).

You can use the same CLI to obtain results for a specific controller or drive.

To view internal components (components that cannot be seen from the outside), review the
list at the bottom of the GUI, under the graphical view of the external components. You can
select any of these components, and details display in the right pane, as with the external
components. Figure 4-34 shows the rear view of the enclosure.

Figure 4-34 Viewing the internal components



You can also choose SAS Chain View to view directly attached expansion enclosures, as
shown in Figure 4-35. A useful view of the entire serial-attached SCSI (SAS) chain is
displayed, with selectable components that show port numbers and canister numbers, along
with a cable diagram for easy cable tracking.


Figure 4-35 SAS Chain View

You can select any enclosure to get more information, including serial number and model
type, as shown in Figure 4-36, where Expansion Enclosure 3 is selected. You can also see
the Events and Component Details areas at the right side of the pane, which shows
information that relates to the enclosure or component that you select.

Figure 4-36 Enclosure Details window

With directly attached expansion enclosures, the view is condensed to show all expansion
enclosures on the right side, as shown in Figure 4-37. The number of events against each
enclosure and the enclosure status are displayed for quick reference. Each enclosure is
selectable, which brings you to the Expansion Enclosure View window.

Figure 4-37 System Overview with attached enclosures

4.4.2 Easy Tier Reports


The management GUI supports monitoring Easy Tier data movement in graphical reports to
help you understand what is happening with the performance of your storage device. Charts
for data movement, tier composition, and workload skew comparison can be viewed as
web-generated HTML files in a browser, or can be downloaded as CSV files.

Data is collected by the IBM Storage Tier Advisor Tool (IBM STAT) tool in 5-minute
increments. When data that is displayed in increments that are larger than 5 minutes (for
example, 1 hour), the data that is displayed for that 1 hour is the sum of all the data points that
were received for that 1-hour time span.

To view Easy Tier data and reports in the management GUI, select one of the following paths:
򐂰 From the management GUI, select Monitoring → Easy Tier Reports.
򐂰 From the management GUI, select Pools → View Easy Tier Reports.

Data Movement statistics


The Data Movement chart displays the migration actions that are triggered by Easy Tier.

Tier composition statistics


The Tier Composition chart displays the distributed workload between the top tier, middle tier,
and bottom tier. Each tier is composed of one or more tier types.



Workload Skew Comparison
The Workload Skew Comparison chart displays the percentage of I/O workload compared to
the total capacity.

Figure 4-38 shows you how to export the Easy Tier Reports.

Figure 4-38 Easy Tier Reports

You can export your Easy Tier stats to a CSV file for further analysis. For more information
about Easy Tier Reports, see Chapter 9, “Advanced features for storage efficiency” on
page 463.

4.4.3 Events
The Events option, which is available in the Monitoring menu, tracks all informational,
warning, and error messages that occur in the system. You can apply various filters to sort
the messages or export them to an external CSV file. Figure 4-39 provides an example of
records in the system event log.

Figure 4-39 Event log list

For the error messages with the highest internal priority, perform corrective actions by running
fix procedures. Click Run Fix (see Figure 4-39 on page 168), and the fix procedure wizard
opens, as shown in Figure 4-40.

Figure 4-40 Performing a fix procedure

The wizard guides you through the troubleshooting and fixing process from a hardware or
software perspective. If you determine that the problem cannot be fixed without a technician’s
intervention, you can cancel the procedure execution at any time.

For more information about fix procedures, see Chapter 13, “Reliability, availability, and
serviceability, monitoring and logging, and troubleshooting” on page 769.
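The event log can also be inspected from the CLI. A minimal sketch, assuming the lseventlog command (the available parameters can vary by code level; see the CLI reference):

IBM_IBM FlashSystem:ITSO-FS9100:superuser>lseventlog -order severity

This command lists the event log entries ordered by severity so that the most critical events appear first.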

4.4.4 Performance
The Performance pane reports the general system statistics that relate to processor (CPU)
utilization, host and internal interfaces, volumes, and MDisks. You can switch between MBps
or IOPS, and drill down in the statistics to the node level. This capability might be useful when
you compare the performance of each control canister in the system if problems exist after a
node failover occurs (see Figure 4-41).

Figure 4-41 Performance statistics of the IBM Storage System



The performance statistics in the GUI show, by default, the latest 5 minutes of data. To see
details of each sample, click the graph and select the time stamp, as shown in Figure 4-42.

Figure 4-42 Sample details

The charts that are shown in Figure 4-42 represent 5 minutes of the data stream. For in-depth
storage monitoring and performance statistics with historical data about your system, use
IBM Spectrum Control or IBM Storage Insights.

You can also obtain a no-charge unsupported version of the Quick Performance Overview
(qperf) from this website.
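Current performance samples can also be retrieved from the CLI. A minimal sketch, assuming the lssystemstats and lsnodecanisterstats commands (the exact set of statistics varies by code level):

IBM_IBM FlashSystem:ITSO-FS9100:superuser>lssystemstats

This command lists the most recent value of each system-level statistic, such as CPU utilization, IOPS, and MBps. The lsnodecanisterstats command provides the same data for each node canister.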

4.4.5 Background Tasks


Use the Background Tasks pane to view and manage current tasks that are running on the
system (see Figure 4-43).

Figure 4-43 Selecting Background Tasks

This menu provides an overview of currently running tasks that were triggered by the
administrator. In contrast to the Running jobs and Suggested tasks indicator in the middle of
the top pane, it does not list the suggested tasks that administrators should consider
performing. The overview provides more details than the indicator, as shown in Figure 4-44.

Figure 4-44 List of running tasks

You can switch between each type (group) of operation, but you cannot show them all in one
list (see Figure 4-45).

Figure 4-45 Switching between types of background tasks

4.5 Pools
The Pools menu option is used to configure and manage storage pools, internal and external
storage, and MDisks, and to migrate previously attached storage to the system.

The Pools menu contains the following items accessible from GUI (see Figure 4-46):
򐂰 Pools
򐂰 Volumes by Pool
򐂰 Internal Storage
򐂰 External Storage
򐂰 MDisks by Pool
򐂰 System Migration

Figure 4-46 Pools menu

For more information about storage pool configuration and management, see Chapter 5,
“Storage pools” on page 221.



4.6 Volumes
A volume is a logical disk that the system presents to attached hosts. By using GUI
operations, you can create different types of volumes depending on the type of topology that
is configured on your system.

The Volumes menu contains the following items, as shown in Figure 4-47:
򐂰 Volumes
򐂰 Volumes by Pool
򐂰 Volumes by Host
򐂰 Volumes by Host Cluster
򐂰 Cloud Volumes

Figure 4-47 Volumes menu

For more information about these tasks and configuration and management process
guidance, see Chapter 6, “Volumes” on page 277.

4.7 Hosts
A host system is a computer that is connected to the system through a Fibre Channel (FC)
interface or an IP network. It is a logical object that represents a list of worldwide port names
(WWPNs) that identify the interfaces that the host uses to communicate with your system. FC
and SAS connections use WWPNs to identify the host interfaces to the system.

The Hosts menu consists of the following choices, as shown in Figure 4-48 on page 173:
򐂰 Hosts
򐂰 Host Clusters
򐂰 Ports by Host
򐂰 Mappings
򐂰 Volumes by Host
򐂰 Volumes by Host Cluster

Figure 4-48 Hosts menu

For more information about configuration and management of hosts by using the GUI, see
Chapter 7, “Hosts” on page 369.

4.8 Copy Services


The IBM Spectrum Virtualize Copy Services and Volumes Copy operations are based on the
FlashCopy function. In its basic mode, the function creates copies of content on a source
volume to a target volume. Any data on the target volume is lost and is replaced by the copied
data.

More advanced functions allow FlashCopy operations to occur on multiple source and target
volumes. Management operations are coordinated to provide a common, single point-in-time
(PiT) for copying target volumes from their respective source volumes. This technique creates
a consistent copy of data that spans multiple volumes.

The Copy Services menu offers the following operations in the GUI, as shown in Figure 4-49:
򐂰 FlashCopy
򐂰 Consistency groups
򐂰 FlashCopy Mappings
򐂰 Remote Copy (RC)
򐂰 Partnership

Figure 4-49 Copy Services in GUI

Copy Services is one of the most important features for resiliency solutions. For more
information, see Chapter 10, “Advanced Copy Services” on page 505.



4.9 Access
The Access menu in the GUI controls who can log in to the system, defines each user's
access rights, and tracks the actions of privileged users. It is
logically split into three categories:
򐂰 Ownership groups
򐂰 Users by group
򐂰 Audit log

In this section, we explain how to create, modify, or remove a user, and how to see records in
the audit log.

The Access menu is available from the left pane, as shown in Figure 4-50.

Figure 4-50 Access menu

4.9.1 Ownership groups


An ownership group defines a subset of users and objects within the system. You can create
ownership groups to further restrict access to specific resources that are defined in the
ownership group. Only users with Administrator or Security Administrator roles can configure
and manage ownership groups. Ownership groups restrict access to only those objects that
are defined within that ownership group. An owned object can belong to only one ownership
group. An owner is a user in an ownership group who can view and manipulate objects within
that group.

The system supports several resources that you assign to ownership groups:
򐂰 Child pools
򐂰 Volumes
򐂰 Volume groups
򐂰 Hosts
򐂰 Host clusters
򐂰 Host mappings
򐂰 FlashCopy mappings
򐂰 FlashCopy consistency groups

When a user group is assigned to an ownership group, the users in that user group retain
their role but are restricted to only those resources within the same ownership group. User
groups can define the access to operations on the system, and the ownership group can
further limit access to individual resources. For example, you can configure a user group with
the Copy Operator role, which limits access of the user to Copy Services functions, such as
FlashCopy and RC operations. Access to individual resources, such as a specific FlashCopy
consistency group, can be further restricted by adding it to an ownership group. When the
user logs on to the management GUI, only resources that they have access to through the
ownership group are displayed. Additionally, only events and commands that are related to
the ownership group in which a user belongs are viewable by those users.
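Ownership groups can also be managed from the CLI. A minimal sketch with a hypothetical group name, assuming the ownership group commands that were introduced with Version 8.3 (see the CLI reference for the full parameter set):

IBM_IBM FlashSystem:ITSO-FS9100:superuser>mkownershipgroup -name Finance
IBM_IBM FlashSystem:ITSO-FS9100:superuser>lsownershipgroup

After the group exists, resources such as child pools and user groups can be assigned to it by using the -ownershipgroup parameter of the corresponding mk and ch commands.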

Inheriting ownership
Depending on the type of resource, ownership can be defined explicitly or ownership can be
inherited from the user, user group, or from other parent resources. Objects inherit their
ownership group from other objects whenever possible:
򐂰 Volumes inherit the ownership group from the child pool that provides capacity for the
volumes.
򐂰 FlashCopy mappings inherit the ownership group from the volumes that are configured in
the mapping.
򐂰 Hosts inherit the ownership group from the host cluster they belong to, if applicable.
򐂰 Host mappings inherit the ownership group from both the host and the volume to which
the host is mapped.

These objects cannot be explicitly moved to a different ownership group without creating
inconsistent ownership.

Ownership groups are also inherited from the user. Objects that are created by an owner
inherit the ownership group of the owner. If the owner is in more than one ownership group
(only possible for remote users), then the owner must choose the group when the object is
created.

The following objects have ownership that is assigned explicitly and do not inherit ownership
from other parent resources:
򐂰 Child pools
򐂰 Host clusters
򐂰 Hosts that are not part of a host cluster
򐂰 Volume groups
򐂰 FlashCopy consistency groups
򐂰 User groups

The following objects inherit their ownership from the user, user group, or another parent
resource:
򐂰 Hosts that are a part of a host cluster
򐂰 Volumes
򐂰 Users
򐂰 Volume-to-host mappings
򐂰 FlashCopy mappings

Child pools
The following rules apply to child pools that are defined in ownership groups:
򐂰 Child pools can be assigned to an ownership group when you create a pool or change a
pool.
򐂰 Users who assign the child pool to the ownership group cannot be defined within that
ownership group.
򐂰 Resources that are within the child pool inherit the ownership group that is assigned for
the child pool.



Host clusters
The following rules apply to host clusters that are defined in ownership groups:
򐂰 If the user who is creating the host cluster is defined in only one ownership group, the host
cluster inherits the ownership group of that user.
򐂰 If the user is defined in an ownership group but is also defined in multiple user groups, the
host cluster inherits the ownership group. The system uses the lowest role that the user
has from the user group. For example, if a user is defined in two user groups with the roles
of Monitor and Copy Operator, the host cluster inherits the Monitor role.
򐂰 Only users not within an ownership group can assign ownership groups when a host
cluster is created or changed.

Hosts that are not part of a host cluster


The following rules apply to hosts that are not part of a host cluster and that are defined in
ownership groups:
򐂰 If the user who is creating the host is in only one ownership group, the host cluster inherits
the ownership group of that user.
򐂰 If the user is defined in an ownership group but is also defined in multiple user groups, the
host inherits the ownership group. The system uses the lowest role that the user has from
the user group. For example, if a user is defined in two user groups with the roles of
Monitor and Copy Operator, the host inherits the Monitor role.
򐂰 Only users not within an ownership group can assign ownership groups when you create a
new host or change an existing host.

Volume groups
Volume groups can be created to manage multiple volumes that are used with Transparent
Cloud Tiering (TCT) support. The following rules apply to volume groups that are defined in
ownership groups:
򐂰 If the user that is creating the volume group is defined in only one ownership group, the
volume group inherits the ownership group of that user.
򐂰 If the user is defined in an ownership group but is also defined in multiple user groups, the
volume group inherits the ownership group. The system uses the lowest role that the user
has from the user group. For example, if a user is defined in two user groups with the roles
of Monitor and Copy Operator, the host inherits the Monitor role.
򐂰 Only users not within an ownership group can assign ownership groups when you create a
new volume group or change an existing volume group.
򐂰 Volumes can be added to a volume group if both the volume and the volume group are
within the same ownership group or if both are not in an ownership group. There are
situations where a volume group and its volumes can belong to different ownership
groups. Volume ownership can be inherited from the ownership group or from one or more
child pools.
򐂰 The ownership of a volume group does not affect the ownership of the volumes it contains.
If a volume group and its volumes are owned by different ownership groups, then the
owner of the child pool that contains the volumes can change the volume directly. For
example, the owner of the child pool can change the name of a volume within it. The
owner of the volume group can change the volume group itself and indirectly change the
volume, such as deleting a volume from the volume group. Neither the ownership group of
the child pools or the owner of the volume group can directly manipulate the resources
that are not defined in their ownership group.

FlashCopy consistency groups
FlashCopy consistency groups can be created to manage multiple FlashCopy mappings. The
following rules apply to FlashCopy consistency groups that are defined in ownership groups:
򐂰 If the user that is creating the FlashCopy consistency group is in only one ownership
group, the FlashCopy consistency group inherits the ownership group of that user.
򐂰 If the user is defined in an ownership group but is also defined in multiple user groups, the
FlashCopy consistency group inherits the ownership group. The system uses the lowest
role that the user has from the user group.
򐂰 Only users not within an ownership group can assign ownership groups when a
FlashCopy consistency group is created or changed.
򐂰 FlashCopy mappings can be added to a consistency group if the volumes in the mapping
and the consistency group are within the same ownership group. You can also add a
FlashCopy mapping to a consistency group if it and all of its dependent resources are not
in an ownership group.
򐂰 There are situations where a FlashCopy consistency group and its resources can belong
to different ownership groups.
򐂰 As with volume groups and volumes, the ownership of the consistency group has no
impact on the ownership of the mappings it contains.

User groups
The following rules apply to user groups that are defined in ownership groups:
򐂰 If the user that is creating the user group is in only one ownership group, the user group
inherits the ownership group of that user.
򐂰 If the user is in multiple user groups, the user group inherits the ownership group of the
user group with the lowest role.
򐂰 Only users not within an ownership group can assign an ownership group when a user
group is created or changed.

The following resources inherit ownership from a parent resource. A user cannot change the
ownership group of the resource directly, but can change the ownership group of the parent
object.

Hosts that are a part of a host cluster


The following rules apply to hosts that are defined in ownership groups:
򐂰 The host inherits the ownership group of the host cluster to which it belongs.
򐂰 If a host is removed from a host cluster within an ownership group, the host inherits the
ownership group of the host cluster to which it used to belong.
򐂰 If a host is removed from a host cluster that is not within an ownership group, the host
inherits no ownership groups.
򐂰 Hosts can be added to a host cluster if the host and host cluster have the same ownership
group.
򐂰 Changing the ownership group of a host cluster automatically changes the ownership
group of all the hosts inside the host cluster.



Volumes
The following rules apply to volumes that are defined in ownership groups:
򐂰 The volume inherits the ownership group of the child pools that provide capacity for the
volume and its copies.
򐂰 If the child pools that provide capacity for the volume or its copies are defined in different
ownership groups, then the volume cannot be created in an ownership group.
򐂰 With volume groups, the volume group and its volumes can belong to different ownership
groups. However, the ownership of a volume group does not impact the ownership of the
volumes that it contains.

Users
The following rules apply to users that are defined in ownership groups:
򐂰 A user inherits the ownership group of the user group to which it belongs.
򐂰 Users that use Lightweight Directory Access Protocol (LDAP) for remote authentication
can belong to multiple user groups and multiple ownership groups.

Volume-to-host mappings
The following rules apply to volume-to-host mappings that are defined in ownership groups:
򐂰 Volume-to-host mappings inherit the ownership group of the host or host cluster and
volume in the mapping.
򐂰 If host or host cluster and volume are within different ownership groups, then the mapping
cannot be assigned an ownership group.

FlashCopy mappings
The following rules apply to FlashCopy mappings that are defined in ownership groups:
򐂰 FlashCopy mappings inherit the ownership group of both volumes that are defined in the
mapping.
򐂰 If the volumes are within different ownership groups, then the mapping cannot be assigned
to an ownership group.
򐂰 Like with FlashCopy consistency groups, it is possible for a consistency group and its
mappings to belong to different ownership groups. However, the ownership of the
consistency group has no impact on the ownership of the mappings that it contains.

Configuring ownership groups


You can configure ownership groups to manage access to resources on the system. An
ownership group defines a subset of users and objects within the system. You can create
ownership groups to further restrict access to specific resources that are defined in the
ownership group. Only users with Administrator or Security Administrator roles can configure
and manage ownership groups.

Migrating to ownership groups


If you updated your system to a software level that supports ownership groups, you must
reconfigure certain resources before you can place them in ownership groups.

Figure 4-51 shows an example of an ownership group.

Figure 4-51 Ownership by Groups
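The same tasks can be performed from the CLI. The following minimal sketch assumes a
child pool named ChildPool0 and a user group named FinanceAdmins; both names are
examples only:

mkownershipgroup -name Finance
chmdiskgrp -ownershipgroup Finance ChildPool0
chusergrp -ownershipgroup Finance FinanceAdmins
lsownershipgroup

After the child pool and the user group are assigned, users in FinanceAdmins see and
manage only the resources that belong to the Finance ownership group.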

4.9.2 Users by groups


You can create local users who can access the system. These user types are defined based
on the administrative privileges that they have on the system.

Local users must provide a password, Secure Shell (SSH) key, or both. Local users are
authenticated through the authentication methods that are configured on the system. If the
local user needs access to the management GUI, a password is needed for the user. If the
user requires access to the CLI through SSH, a password or a valid SSH key file is necessary.

Local users must be part of a user group that is defined on the system. User groups define
roles that authorize the users within that group to a specific set of operations on the system.

To define your user group in your system, select Access → Users by Group, as shown in
Figure 4-52.

Figure 4-52 Accessing Users by Group



Select Create User Group, as shown in Figure 4-53.

Figure 4-53 Defining a User Group

The following privileged user group roles exist in IBM Spectrum Virtualize:
򐂰 Monitor
These users can access all system viewing actions. Monitor role users cannot change the
state of the system or the resources that the system manages. Monitor role users can
access all information-related GUI functions and commands, back up configuration data,
and change their own passwords.
򐂰 Copy Operator
These users can start and stop all existing FlashCopy, MM, and GM relationships. Copy
Operator role users can run the system commands that Administrator role users can run
that deal with FlashCopy, MM, and GM relationships.
򐂰 Service
These users can set the time and date on the system, delete dump files, add and delete
nodes, apply service, and shut down the system. Users can also complete the same tasks
as users in the monitor role.
򐂰 Administrator
These users can manage all functions of the system except for those functions that
manage users, user groups, and authentication. Administrator role users can run the
system commands that the Security Administrator role users can run from the CLI, except
for commands that deal with users, user groups, and authentication.
򐂰 Security Administrator
These users can manage all functions of the system, including managing users, user
groups, user authentication, and configuring encryption. Security Administrator role users
can run any system commands from the CLI. However, they cannot run the sainfo and
satask commands from the CLI. Only the superuser ID can run those commands.
򐂰 Restricted Administrator
These users can perform the same tasks and run most of the same commands as
Administrator role users. However, users with the Restricted Administrator role are not
authorized to run the rmvdisk, rmvdiskhostmap, rmhost, or rmmdiskgrp commands.
Support personnel can be assigned this role to help resolve errors and fix problems.

򐂰 3-Site Administrator
These users can configure, manage, and monitor 3-site replication configurations through
certain command operations that are available only on the 3-Site Orchestrator. Before you
can work with 3-Site Orchestrator, a user profile must be created.
򐂰 vSphere APIs for Storage Awareness (VASA) Provider
These users can manage VMware vSphere Virtual Volumes (VVOLS).

Registering a user
After you define your group (in our example, Redbook1 with Administrators privileges), you
can register a user within this group by clicking Create User and selecting Redbook1 (see
Figure 4-54).

Figure 4-54 Registering a user account
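If you prefer the CLI, the equivalent sequence can be sketched as shown next. The group
name matches the earlier example, and the user name and password are placeholders:

mkusergrp -name Redbook1 -role Administrator
mkuser -name jsmith -usergrp Redbook1 -password Passw0rd1
lsuser

A user can also be created with an SSH public key instead of, or in addition to, a password
by specifying the -keyfile parameter of the mkuser command.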



Deleting a user
To remove a user account, right-click the user in the All Users list and select Delete, as
shown in Figure 4-55.

Figure 4-55 Deleting a user account

Attention: When you click Delete, the user account is directly deleted. No other
confirmation request is presented.

Setting a new password
To set a new password for the user, right-click the user (or click Actions) and select
Properties. In this window, you can either assign the user to a different group or reset their
password (see Figure 4-56).

Figure 4-56 Setting a new password

4.9.3 Audit log


An audit log documents actions that are submitted through the management GUI or the CLI.
You can use the audit log to monitor user activity on your system.

The audit log entries provide the following information:


򐂰 Time and date when the action or command was submitted.
򐂰 Name of the user who completed the action or command.
򐂰 IP address of the system where the action or command was submitted.
򐂰 Name of source and target node on which the command was submitted.
򐂰 Parameters that were submitted with the command, excluding confidential information.



򐂰 Results of the command or action that completed successfully.
򐂰 Sequence number and the object identifier that is associated with the command or action.

An example of the audit log is shown in Figure 4-57.

Figure 4-57 Audit log
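The audit log can also be viewed from the CLI. For example, the following command lists the
five most recent entries:

catauditlog -first 5

Each entry includes the timestamp, user name, source IP address, result code, sequence
number, and the command string that was run.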

The following commands are not documented in the audit log:


򐂰 dumpconfig
򐂰 cpdumps
򐂰 finder
򐂰 dumperrlog

The following items are also not documented in the audit log:
򐂰 Commands that fail are not logged.
򐂰 A result code of 0 (success) or 1 (success in progress) is not logged.
򐂰 Result object ID of node type (for the addnode command) is not logged.
򐂰 Views are not logged.

Important: Failed commands are not recorded in the audit log. Commands that are
triggered by IBM Support personnel are recorded with the flag Challenge because they
use challenge-response authentication.

4.10 Settings
Use the Settings pane to configure system options for notifications, security, IP addresses,
and preferences that are related to display options in the management GUI (see Figure 4-58).

Figure 4-58 Settings menu

The following options are available for configuration from the Settings menu:
򐂰 Notifications: The system can use Simple Network Management Protocol (SNMP) traps,
syslog messages, and Call Home emails to notify you and IBM Support Center when
significant events are detected. Any combination of these notification methods can be
used simultaneously.
򐂰 Network: Use the Network pane to manage the management IP addresses for the system,
service IP addresses for the nodes, and internet Small Computer Systems Interface
(iSCSI) and FC configurations. The system must support FC or Fibre Channel over
Ethernet (FCoE) connections to your storage area network (SAN).
򐂰 Security: Use the Security pane to configure and manage remote authentication services.
򐂰 System: Use the System menu to manage overall system configuration options, such as
licenses, updates, and date and time settings.
򐂰 Support: Use this option to configure and manage connections, and upload support
packages to the support center.
򐂰 GUI Preferences: Configure the welcome message that is shown at login, GUI refresh
intervals, and logout timeouts.

4.10.1 Notifications
Your IBM Storage System can use SNMP traps, syslog messages, and Call Home email to
notify you and the IBM Support Center when significant events are detected. Any combination
of these notification methods can be used simultaneously.

Notifications are normally sent immediately after an event is raised. However, events can
occur because of service actions that are performed. If a recommended service action is
active, notifications about these events are sent only if the events are still unfixed when the
service action completes.

SNMP notifications
SNMP is a standard protocol for managing networks and exchanging messages. The system
can send SNMP messages that notify personnel about an event. You can use an SNMP
manager to view the SNMP messages that are sent by your storage system.



To view the SNMP configuration, click the Settings icon and select Notification → SNMP
(see Figure 4-59).

Figure 4-59 Setting the SNMP server and traps

In Figure 4-59, you can view and configure an SNMP server to receive various informational,
error, or warning notifications by setting the following information:
򐂰 IP Address
The address for the SNMP server.
򐂰 Server Port
The remote port number for the SNMP server. The remote port number must be a value
1 - 65535.
򐂰 Community
The SNMP community is the name of the group to which devices and management
stations that run SNMP belong.
򐂰 Event Notifications
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.

Important: Browse to Recommended Actions to run the fix procedures on these
notifications.

– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine any corrective
action.

Important: Browse to Recommended Actions to run the fix procedures on these
notifications.

– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
To remove an SNMP server, click the minus sign (-). To add another SNMP server, click
the plus sign (+).
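An SNMP server can also be defined from the CLI. In the following sketch, the IP address
and community name are placeholders for your environment:

mksnmpserver -ip 192.168.1.50 -port 162 -community public -error on -warning on -info off
lssnmpserver

The -error, -warning, and -info parameters correspond to the Event Notifications selections
in the GUI.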

Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The IP network can be IPv4 or IPv6. The system can send syslog
messages that notify personnel about an event. You can use the Syslog pane to view the
syslog messages that are sent by the system. To view the Syslog configuration, go to the
System pane and click Settings, and select Notification → Syslog (see Figure 4-60). A
domain name server (DNS) server is required to use domain names in syslog.

Figure 4-60 Setting the syslog messages

From this window, you can view and configure a syslog server to receive log messages from
various systems and store them in a central repository by entering the following information:
򐂰 IP Address
The IP address for the syslog server.
򐂰 Port
The port number for the syslog server.
򐂰 Protocol
The transmission protocol. Select UDP or TCP.
򐂰 Facility
The facility determines the format for the syslog messages. The facility can be used to
determine the source of the message.
򐂰 Message Format
The message format depends on the facility. The system can transmit syslog messages in
the following formats:
– The concise message format provides standard detail about the event.
– The expanded format provides more details about the event.
򐂰 Event Notifications
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.

Important: Browse to Recommended Actions to run the fix procedures on these
notifications.

– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine whether any
corrective action is necessary.

Important: Browse to Recommended Actions to run the fix procedures on these
notifications.



– Select Info if you want the user to receive messages about expected events. No action
is required for these events.

To remove a syslog server, click the minus sign (-). To add another syslog server, click the
plus sign (+).

The syslog messages can be sent in concise message format or expanded message format.

Example 4-1 shows a compact format syslog message.

Example 4-1 Compact syslog message example


IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node
CPU fan failed #ClusterName=FSCluster1 #Timestamp=Wed Oct 22 [Link] 2019 BST
#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100

Example 4-2 shows an expanded format syslog message.

Example 4-2 Full format syslog message example


IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node
CPU fan failed #ClusterName=FSCluster1 #Timestamp=Wed Oct 22 [Link] 2019 BST
#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100 #ObjectID=2
#NodeID=2 #MachineType=2076724#SerialNumber=1234567 #SoftwareVersion=[Link]
(build 150.15.1910190824000)#FRU=fan 24P1118, system board 24P1234
#AdditionalData(0->63)=00000000210000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000#Additional
Data(64-127)=000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000
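A syslog server can likewise be created from the CLI. In this sketch, the IP address is a
placeholder; the facility value of 0 is assumed to select the concise message format, based
on the facility-to-format mapping that is described above:

mksyslogserver -ip 192.168.1.60 -facility 0 -error on -warning on -info on
lssyslogserver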

4.10.2 Network
This section describes how to view the network properties of the storage system. The
network information can be obtained by clicking Network, as shown in Figure 4-61.

Figure 4-61 Accessing network information

Configuring the network


The procedure to set up and configure an IBM Storage System network interface is described
in Chapter 3, “Initial configuration” on page 105.

Management IP addresses
To view the management IP addresses of IBM Spectrum Virtualize, select Settings →
Network, and click Management IP Addresses. The GUI shows the management IP
address by pointing to the network ports, as shown in Figure 4-62.

Figure 4-62 Viewing the management IP addresses

Service IP information
To view the Service IP information of your IBM Spectrum Virtualize installation, select
Settings → Network, as shown in Figure 4-61 on page 188. Click the Service IP Address
option to view the properties, as shown in Figure 4-63.

Figure 4-63 Viewing service IP addresses

The service IP address is commonly used to provide access to the network interfaces on
each individual node of the control enclosure.

Instead of reaching the management IP address, the service IP address directly connects to
each individual node canister for service operations. You can select a node canister of the
control enclosure from the drop-down list and then click any of the ports that are shown in the
GUI. The service IP address can be configured to support IPv4 or IPv6.

Ethernet ports
Ethernet ports for each node are at the rear of the system and used to connect the system to
hosts, external storage systems, and to other systems that are part of RC partnerships.
Depending on the model of your system, supported connection types include FC, when the
ports are FCoE-capable, iSCSI, and iSCSI Extensions for Remote Direct Memory Access
(RDMA) (iSER). iSER connections use either the RDMA over Converged Ethernet (RoCE)
protocol or the internet Wide Area RDMA Protocol (iWARP). The panel indicates whether a
specific port is being used for a specific purpose and traffic.



You can modify how the port is used by selecting Actions. Select either Modify Remote
Copy, Modify iSCSI Hosts, or Modify Storage Ports to change the use of the port. You can
also display the login information for each host that is logged in to a selected node.

To display this information, select Settings → Network → Ethernet Ports and right-click the
node and select IP Login Information. This information can be used to detect connectivity
issues between the system and hosts and to improve the configuration of iSCSI hosts to
optimize performance. Select Ethernet Ports for an overview from the menu, as shown in
Figure 4-64. For planning, see Chapter 2, “Planning” on page 77.

Figure 4-64 Ethernet Ports

Priority flow control


Priority flow control (PFC) is an Ethernet protocol that you can use to select the priority of
different types of traffic within the network. With PFC, administrators can reduce network
congestion by slowing or pausing certain classes of traffic on ports, thus providing better
bandwidth for more important traffic. The system supports PFC on various supported
Ethernet-based protocols on three types of traffic classes: system, host attachment, and
storage traffic. You can configure a priority tag for each of these traffic classes. The priority
tag can be any value 0 - 7. You can set identical or different priority tag values to all these
traffic classes. You can also set bandwidth limits to ensure quality of service (QoS) for these
traffic classes by using the Enhanced Transmission Selection (ETS) setting on the network.
When you plan to configure PFC, follow these guidelines and examples.

To use PFC and ETS, ensure that the following tasks are completed:
򐂰 Ensure that ports support 10 Gb or higher bandwidth to use PFC settings.
򐂰 Configure a virtual local area network (VLAN) on the system to use PFC capabilities for
the configured IP version.
򐂰 Ensure that the same VLAN settings are configured on all entities, including all
switches between the communicating end points.
򐂰 Configure the QoS values (priority tag values) for host attachment, storage, or system
traffic by running the chsystemethernet command.
򐂰 To enable priority flow for host attachment traffic on a port, make sure that the host flag is
set to yes on the configured IP on that port.
򐂰 To enable priority flow for storage traffic on a port, make sure that storage flag is set to yes
on the configured IP on that port.
򐂰 On the switch, enable the Data Center Bridging Exchange (DCBx). DCBx enables switch
and adapter ports to exchange parameters that describe traffic classes and PFC
capabilities. For these steps, check your switch documentation for details.

򐂰 For each supported traffic class, configure the same priority tag on the switch. For
example, if you plan to have a priority tag setting of 3 for storage traffic, ensure that the
priority is also set to 3 on the switch for that traffic type.
򐂰 If you are planning on using the same port for different types of traffic, ensure that the ETS
settings are configured on the network.
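As noted above, the priority tags are set with the chsystemethernet command. The following
lines are a sketch only; the parameter names shown here are assumptions, so confirm the
exact syntax in the command reference for your code level:

chsystemethernet -hostattachpriority 3
chsystemethernet -storagepriority 4
chsystemethernet -systempriority 5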

4.10.3 Using the management GUI


To set PFC on the system, complete the following steps:
1. In the management GUI, select Settings → Network → Priority Flow Control, as shown
in Figure 4-65.

Figure 4-65 Priority flow control

2. For each of the following classes of service, select the priority setting for that traffic type:
– System
Set a value 0 - 7 for the system traffic, which includes communication between nodes
within the system. The system priority tag is supported on iSCSI connections and on
systems that support RDMA over Ethernet connections between nodes. Ensure that
you set the same priority tag on the switch to use PFC capabilities.
– Host attachment
Set the priority tag 0 - 7 for system to host traffic. The host attachment priority tag is
supported on iSCSI connections and on systems that support RDMA over Ethernet
connections. Ensure that you set the same priority tag on the switch to use PFC
capabilities.
– Storage virtualization
Set the priority tag 0 - 7 for system to external storage traffic. The storage virtualization
priority tag is supported on storage traffic over iSCSI connections. Ensure that you set
the same priority tag on the switch to use PFC capabilities.
Make sure that IP is configured with VLAN.



iSCSI information
From the iSCSI pane in the Settings menu, you can display and configure parameters for the
system to connect to iSCSI-attached hosts, as shown in Figure 4-66.

Figure 4-66 iSCSI Configuration pane

The following parameters can be updated:


򐂰 System Name
It is important to set the system name correctly because it is part of the iSCSI Qualified
Name (IQN) for the node.

Important: If you change the name of the system after iSCSI is configured, you might
need to reconfigure the iSCSI hosts.

To change the system name, click the system name and specify the new name.

System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The name can be 1 - 63 characters.

򐂰 iSCSI aliases (optional)


An iSCSI alias is a user-defined name that identifies the node to the host. Complete the
following steps to change an iSCSI alias:
a. Click an iSCSI alias.
b. Specify a name for it.
Each node has a unique iSCSI name that is associated with two IP addresses. After the
host starts the iSCSI connection to a target node, this IQN from the target node is visible in
the iSCSI configuration tool on the host.
򐂰 Internet Storage Name Service (iSNS) and Challenge Handshake Authentication Protocol
(CHAP)
You can specify the IP address for the iSNS. Host systems use the iSNS server to manage
iSCSI targets and for iSCSI discovery.
You can also enable CHAP to authenticate the system and iSCSI-attached hosts with the
specified shared secret.

The CHAP secret is the authentication method that is used to restrict access for other
iSCSI hosts that use the same connection. You can set the CHAP for the whole system
under the system properties or for each host definition. The CHAP must be identical on
the server and the system and host definition. You can create an iSCSI host definition
without using CHAP.
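From the CLI, a system-wide CHAP secret and a per-host secret can be sketched as follows;
the secret values and host name are placeholders:

chsystem -chapsecret mysystemsecret
chhost -chapsecret myhostsecret iSCSI_Host01

The same secret must also be configured on the host-side iSCSI initiator for authentication
to succeed.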

Fibre Channel information


As shown in Figure 4-67, you can use the Fibre Channel Connectivity pane to display the
FC connectivity between nodes and other storage systems and hosts that attach through the
FC network. You can filter by selecting one of the following fields:
򐂰 All nodes, storage systems, and hosts
򐂰 Systems
򐂰 Nodes
򐂰 Storage systems
򐂰 Hosts

You can view Fibre Channel Connectivity as shown in Figure 4-67.

Figure 4-67 Fibre Channel Connectivity



In the Fibre Channel Ports pane, you can use this view to display how the FC port is
configured across all control node canisters in the system. This view helps, for example, to
determine which other clusters and hosts the port may communicate with, and which ports
are virtualized. “No” indicates that this port cannot be online on any node other than the
owning node (see Figure 4-68).

Figure 4-68 Viewing Fibre Channel Port properties

Non-Volatile Memory Express connectivity


A Non-Volatile Memory Express (NVMe) over FC host can be attached to the system. For
other specific information about NVMe over Fibre Channel (FC-NVMe), such as
interoperability requirements, see V8.3.0.x Configuration Limits and Restrictions for IBM
Storwize V7000 and V7000F.

If your system supports an FC-NVMe connection between nodes and hosts, you can display
details about each side of the connection. To display node details, select the node from the
drop-down menu and select Show Results. You can also display the host details for the
connection or for all hosts and nodes. Use this window to troubleshoot issues between nodes
and hosts that use FC-NVMe connections.

For these connections, the Status column displays the current state of the connection. The
following states for the connection are possible:
򐂰 Active
Indicates that the connection between the node and host is being used.
򐂰 Inactive
Indicates that the connection between the node and host is configured, but no FC-NVMe
operations have occurred in the last 5 minutes. Because the system sends periodic
heartbeat messages to keep the connection open between the node and the host, it is
unusual to see an inactive state for the connection. However, it can take up to 5 minutes
for the state to change from inactive to active. If the inactive state remains beyond the
5-minute refresh interval, it can indicate a connection problem between the host and the
node. If the problem persists, a reduced node login count or a degraded host status can
confirm it; verify these values by selecting Hosts → Hosts by Port in the management
GUI, and view the related messages by selecting Monitoring → Events.

Figure 4-69 shows the NVMe Connectivity menu.

Figure 4-69 NVMe Connectivity window

Consider FC-NVMe target limits when you plan and configure the hosts. Include the following
points in your plan:
򐂰 An NVMe host can connect to four NVMe controllers on each system node. The maximum
per node is four with an extra four in failover.
򐂰 Zone up to four ports in a single host to detect up to four ports on a node. To allow failover
and avoid outages, zone the same or extra host ports to detect an extra four ports on the
second node in the I/O group.
򐂰 A single I/O group can contain up to 256 FC-NVMe I/O controllers. The maximum number
of I/O controllers per node is 128 plus an extra 128 in failover. Zone a total maximum of 16
hosts to detect a single I/O group. Also, consider that a single system target port may have
up to 16 NVMe I/O controllers.

When you install and configure attachments between the system and a host that runs the
Linux operating system (OS), follow specific guidelines. For more information about these
guidelines, see Linux specific guidelines.

4.10.4 Security
Use the Security option from the Settings menu (as shown in Figure 4-70) to view and
change security settings, authenticate users, and manage secure connections.

Figure 4-70 Security menu



Remote Authentication
In the Remote Authentication pane, you can configure remote authentication with LDAP, as
shown in Figure 4-71. By default, the system has local authentication enabled. When you
configure remote authentication, you do not need to configure users on the system or assign
more passwords. Instead, you can use your passwords and user groups that are defined on
the remote service to simplify user management and access, enforce password policies more
efficiently, and separate user management from storage management.

Figure 4-71 Configuring Remote Authentication
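Remote authentication can also be configured from the CLI. The following sketch assumes
an Active Directory server at a placeholder address; verify the exact parameters for your
directory type and code level before use:

mkldapserver -type ad -ip 192.168.1.80
chauthservice -type ldap -enable yes
testldapserver

Running a test against the LDAP server before users attempt to log in helps confirm
connectivity and credential handling.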

Encryption
As shown in Figure 4-72, you can enable or disable the encryption function on an IBM
Storage System. For more information, see Chapter 12, “Encryption” on page 701.

Figure 4-72 Enable Encryption

Secure Communications
To enable or manage secure communications, select the Secure Communications pane, as
shown in Figure 4-73. Before you create a request for either type of certificate, ensure that
your current browser does not have restrictions about the type of keys that are used for
certificates.

Some browsers limit the use of specific key-types for security and compatibility issues. Select
Update Certificate to add new certificate details, including certificates that were created and
signed by a third-party certificate authority (CA).

Figure 4-73 Configuring Secure Communications

4.10.5 System menus


Click the System option from the Settings menu (see Figure 4-74) to view and change the
time and date settings, work with licensing options, download configuration settings, work
with VMware VVOLs and IP Quorum, or download software upgrade packages.

Figure 4-74 System option



Date and time
To view or configure the date and time settings, complete the following steps:
1. From the main System pane, click Settings and click System.
2. In the left column, select Date and Time, as shown in Figure 4-75.

Figure 4-75 Date and Time window

3. From this pane, you can modify the following information:


– Time zone
Select a time zone for your system by using the drop-down list.
– Date and time
The following options are available:
• If you are not using a Network Time Protocol (NTP) server, select Set Date and
Time, and then manually enter the date and time for your system, as shown in
Figure 4-76 on page 199. You can click Use Browser Settings to automatically
adjust the date and time of your system with your local workstation date and time.

Figure 4-76 Set Date and Time window

• If you are using an NTP server, select Set NTP Server IP Address, and then enter
the IP address of the NTP server, as shown in Figure 4-77.

Figure 4-77 Set NTP Server IP Address window

4. Click Save.
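The same settings can be applied from the CLI. The NTP address below is a placeholder,
and lstimezones lists the valid time zone IDs for the settimezone command:

settimezone -timezone 520
chsystem -ntpip 192.168.1.10

When no NTP server is available, the date and time can instead be set manually with the
setsystemtime command.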

Licensing
The base license that is provided with your system includes the use of its basic functions.
However, the extra licenses can be purchased to expand the capabilities of your system.
Administrators are responsible for purchasing extra licenses and configuring the systems
within the license agreement, which includes configuring the settings of each licensed
function on the system.

Depending on the platform, different license schemes may be used:


򐂰 The IBM FlashSystem 5100 system uses enclosure-based licensing, which allows the use
of certain licensed functions that are based on the number of enclosures that are indicated
in the license.
򐂰 IBM FlashSystem 7200, IBM FlashSystem 9100, and IBM FlashSystem 9200 systems use
differential licensing for external virtualization, and capacity-based licensing for other
functions.

Differential licensing charges different rates for different types of virtualized storage, which
provides cost-effective management of capacity across multiple tiers of storage. It is based on
the number of storage capacity units (SCUs) that are purchased.

Each SCU corresponds to a different amount of usable capacity based on the type of storage.



Table 4-1 shows the different storage types and the associated SCU ratios.

Table 4-1 SCU ratio per storage type


Storage type                                      SCU ratio
Storage-class memory (SCM), Tier 0, and Tier 1    One SCU equates to 1 TiB of solid-state
                                                  drive (SSD) or flash storage.
Enterprise Tier                                   One SCU equates to 1.18 TiB of Enterprise
                                                  storage.
Nearline (NL) Tier                                One SCU equates to 4.00 TiB of NL storage.

License settings are initially entered in a system initialization wizard. They can also be
changed later.

To view or configure the licensing settings, complete the following steps:


1. From the main Settings pane, click Settings and select System.
2. In the left column, click Licensed Functions. The example in Figure 4-78 shows the
License Functions pane of an IBM FlashSystem 9100 system, which uses differential
licensing for External Virtualization.

Figure 4-78 Licensing window

3. In the Licensed Functions pane, you can view or set the licensing options for the IBM
Storage System for the following elements:
– External Virtualization
You can enter the number of SCU units that are licensed for External Virtualization.
When monitoring External Virtualization license usage, consider the following items:
• The license accounts for usable MDisk capacity. For example, one SCU is used for
a 1 TB Tier 0 MDisk when it is assigned to a storage pool, regardless of the
amount of actual data that is written on the MDisk.
• An SCU is used for both complete and incomplete chunks of MDisk capacity. For
example, if the combined capacity of all NL tier MDisks in your system is 5 TB, two
SCUs are needed: one SCU for the first 4 TB of NL storage, and a second SCU for
the remaining 1 TB (a full SCU is consumed even for a partial chunk).

If your system uses enclosure-based licensing, specify the number of enclosures of
external storage systems that are attached to your IBM Storage System. Data can be
migrated from storage systems to your system that uses the External Virtualization
function within 45 days of purchase of the system without purchase of a license. After
45 days, any ongoing use of the External Virtualization function requires a license for
each enclosure in each external system.

– FlashCopy (if required on the platform)


The FlashCopy function copies the contents of a source volume to a target volume. It is
also used to create cloud snapshots of volumes in systems that have TCT enabled.
FlashCopy can be licensed in terabytes (TB). In this case, the used capacity for
FlashCopy mappings is the sum of all of the volumes that are the source volumes of a
FlashCopy mapping and volumes with cloud snapshots.
If licensed in enclosures, FlashCopy can be used on a total number of internal
enclosures and virtualized (external) enclosures.
– Remote mirroring
The remote mirroring function configures a relationship between two volumes. This
function mirrors updates that are made to one volume to another volume. The volumes
can be in the same system or on two different systems.
If a remote mirroring function is licensed per enclosure, you can use the remote
mirroring functions on the total number of enclosures that are licensed. The total
number of enclosures must include the enclosures on external storage systems that
are licensed for virtualization and the number of control and expansion enclosures that
are part of your local system.
If licensed by capacity, the function specifies the amount of data that can be replicated.
The used capacity for remote mirroring is the sum of the capacities of all of the
volumes that are in an MM or GM relationship. Both master and auxiliary volumes are
counted.
The license settings apply only to the system on which you are configuring license
settings. For RC partnerships, a license is also required on any remote systems that
are in the partnership.

– Compression (if required on a platform)


With the compression function, data is compressed as it is written to disk, which saves
extra capacity for the system. If a compression license is not included in a base license
for your platform, it must be purchased separately for each enclosure.
– Encryption license
In addition to these enclosure-based licensed functions, the system also supports
encryption through a key-based license. Key-based licensing requires an authorization
code to activate encryption on the system. Only certain models of the control
enclosures support encryption.
During initial setup, you can select to activate the license with the authorization code.
The authorization code is sent with the licensed function authorization documents that
you receive after purchasing the license. These documents contain the authorization
codes that are required to obtain keys for the encryption function that you purchased
for your system.



Encryption is activated on a per system basis, and an active license is required for
each control enclosure that uses encryption. During system setup, the system detects
any SAS attached enclosures and applies the license to these enclosures. If control
enclosures are added and require encryption, more encryption licenses must be
purchased and activated.
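License values can also be set from the CLI with the chlicense command. The parameters
and units differ by platform and license scheme (for example, terabytes for capacity-based
licenses or enclosure counts for enclosure-based licenses), so the following lines are
illustrative only:

chlicense -flash 20
chlicense -remote 20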

Updating your storage system


For more information about the update procedure that uses the GUI, see Chapter 13,
“Reliability, availability, and serviceability, monitoring and logging, and troubleshooting” on
page 769.

VMware vSphere Virtual Volumes


IBM Spectrum Virtualize can manage VVOLs directly in cooperation with VMware. It enables
VMware virtual machines (VMs) to get the assigned disk capacity directly rather than from the
ESXi data store. This technique enables storage administrators to control the appropriate
usage of storage capacity, and to enable enhanced features of storage virtualization directly
to the VM (such as replication, thin-provisioning, compression, and encryption).

VVOL management is enabled in the System section, as shown in Figure 4-79. The NTP
server must be configured before enabling VVOL management. As a best practice, use the
same NTP server for ESXi and your system.

Figure 4-79 Enabling VVOLs management

Restriction: You cannot enable VVOLs support until the NTP server is configured in the
system.

For more information about VVOLs, see the following publications:


򐂰 Quick-start Guide to Configuring VMware Virtual Volumes for Systems Powered by IBM
Spectrum Virtualize, REDP-5321
򐂰 Configuring VMware Virtual Volumes for Systems Powered by IBM Spectrum Virtualize,
SG24-8328

Volume protection
Volume protection prevents active volumes or host mappings from being deleted inadvertently
if the system detects recent I/O activity. This global setting is enabled by default on new
systems. You can either set this value to apply to all volumes that are configured on your
system, or control whether the system-level volume protection is enabled or disabled on
specific pools. To prevent an active volume from being deleted unintentionally, administrators
can use the system-wide setting to enable volume protection. They can also specify a period
for which the volume must be idle before it can be deleted.

If volume protection is enabled and the period is not expired, the volume deletion fails even if
the -force parameter is used. The system-wide volume protection and the pool-level
protection must both be enabled for protection to be active on a pool. The pool-level
protection depends on the system-level setting to ensure that protection is applied
consistently for volumes within that pool.

If system-level protection is enabled but pool-level protection is not enabled, any volumes in
the pool can be deleted even when the setting is configured at the system level. When you
delete a volume, the system verifies whether it is a part of a host mapping, FlashCopy
mapping, or remote-copy relationship. For a volume that contains these dependencies, the
volume cannot be deleted unless the -force parameter is specified on the corresponding
remove commands. However, the -force parameter does not delete a volume if it has recent
I/O activity and volume protection is enabled. The -force parameter overrides the volume
dependencies, not the volume protection setting.

For more information, see Figure 4-80.

Figure 4-80 Volume protection
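System-level volume protection can be sketched from the CLI as follows; the 60-minute idle
period is an example value:

chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60

Pool-level protection is enabled separately on each pool (through the corresponding
chmdiskgrp parameter), and both levels must be enabled for protection to be active on a
pool.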

IP quorum
IBM Spectrum Virtualize also supports an IP quorum application. By using an IP-based
quorum application as the quorum device for the third site, no Fibre Channel connectivity at
the third site is required. Java applications run on hosts at the third site.



To install the IP quorum device, complete the following steps:
1. If your IBM Storage System is configured for IPv4, click Download IPv4 Application. If it
is configured for IPv6, select Download IPv6 Application. In our example, IPv4 is the
option, as shown in Figure 4-81.

Figure 4-81 IP Quorum settings

2. After you select your correct IP configuration, IBM Spectrum Virtualize generates an IP
Quorum Java application, as shown in Figure 4-82. The application can be saved and
installed in a host that is to run the IP quorum application.

Figure 4-82 IP Quorum Java application

3. After you download the IP quorum application, you must install the application on a
separate host or server.
4. If you change the configuration by adding a node, changing a service IP address, or
changing Secure Sockets Layer (SSL) certificates, you must download and install the IP
quorum application again.
5. On the host, you must use the Java command line to initialize the IP quorum application.
On the server or host on which you plan to run the IP quorum application, create a
separate directory that is dedicated to the IP quorum application.

6. Run the ping command on the host server to verify that it can establish a connection with
the service IP address of each node in the system.
7. Change to the folder where the application is, and run the following command:
java -jar ip_quorum.jar

Note: The IP quorum application must always be running.

8. To verify that the IP quorum application is installed and active, select Settings →
System → IP Quorum. The new IP quorum application is displayed in the table of
detected applications. The system automatically selects MDisks for quorum disks.

An IP quorum application can also act as the quorum device for systems that are configured
with a single-site or standard topology that does not have any external storage configured.
The IP quorum mode is set to Standard when the system is configured for standard topology;
the quorum mode of Preferred or Winner is in effect only if the system topology is not set to
Standard. To change the quorum mode for the IP quorum application, select Quorum Setting
and set the mode to Preferred or Winner. This configuration gives the tie-breaker capability
to a system, and automatically resumes I/O processing if half of the system's nodes or
enclosures are inaccessible. For specific quorum settings, see Figure 4-83.

Figure 4-83 Quorum settings
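On the quorum host, starting the application and then verifying it from the system CLI can be
sketched as follows; the directory path is illustrative:

java -jar /opt/ipquorum/ip_quorum.jar &
lsquorum

When the application connects successfully, it appears in the lsquorum output as an active
quorum device.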

I/O groups
For ports within an I/O group, you can enable virtualization of FC ports that are used for host
I/O operations. With N_Port ID Virtualization (NPIV), the FC port consists of both a physical
port and a virtual port. When port virtualization is enabled, ports do not come up until they are
ready to handle I/O, which improves host behavior. In addition, path failures due to an offline
node are masked from hosts.

The target port mode on the I/O group indicates the current state of port virtualization:
򐂰 Enabled: The I/O group contains virtual ports that are available to use.
򐂰 Disabled: The I/O group does not contain any virtualized ports.
򐂰 Transitional: The I/O group contains physical FC and virtual ports that are being used. You
cannot change the target port mode directly from Enabled to Disabled states, or vice
versa. The target port mode must be in a transitional state before it can be changed to
Disabled or Enabled states.



The system can be in the transitional state for an indefinite period while the system
configuration is changed. However, system performance can be affected because the
number of paths from the system to the host doubled. To avoid increasing the number of
paths substantially, use zoning or other means to temporarily remove some of the paths
until the state of the target port mode is enabled.

The port virtualization settings of I/O groups are available by selecting Settings → System →
I/O Groups, as shown in Figure 4-84.

Figure 4-84 I/O group port virtualization

You can change the status of the port by right-clicking the I/O group and selecting Change
NPIV Settings, as shown in Figure 4-85.


Figure 4-85 Changing NPIV settings
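The same change can be made from the CLI. Because the target port mode cannot move
directly between Enabled and Disabled, the change is made in two steps, as in this sketch
for io_grp0:

chiogrp -fctargetportmode transitional io_grp0
chiogrp -fctargetportmode enabled io_grp0

Verify host pathing while the I/O group is in the transitional state before you complete the
change.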

Domain name server


IBM Spectrum Virtualize allows DNS entries to be manually set up in the system. The
information about the DNS servers helps the system to access the DNS servers to resolve
the names of the computer resources that are in the external network.

To view and configure DNS server information in IBM Spectrum Virtualize, complete the
following steps:
1. In the left pane, click the DNS icon and enter the IP address and Name of each DNS
server. IBM Spectrum Virtualize supports up to two DNS servers for IPv4 or IPv6 (see
Figure 4-86).

Figure 4-86 DNS information

2. Click Save after you finish entering the DNS server information.
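DNS servers can also be defined from the CLI; the name and address in this sketch are
placeholders:

mkdnsserver -name dns01 -ip 192.168.1.5
lsdnsserver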

Transparent Cloud Tiering


TCT is a licensed function that enables volume data to be copied and transferred to cloud
storage. The system supports creating connections to cloud service providers (CSPs) to store
copies of volume data in private or public cloud storage.

With TCT, administrators can move older data to cloud storage to free up capacity on the
system. PiT snapshots of data can be created on the system and then copied and stored on
the cloud storage. An external CSP manages the cloud storage, which reduces storage costs
for the system. Before data can be copied to cloud storage, a connection to the CSP must be
created from the system.

A cloud account is an object on the system that represents a connection to a CSP by using a
particular set of credentials. These credentials differ depending on the type of CSP that is
being specified. Most CSPs require the host name of the CSP and an associated password,
and some CSPs also require certificates to authenticate users of the cloud storage.

Public clouds use certificates that are signed by well-known CAs. Private CSPs can use a
self-signed certificate or a certificate that is signed by a trusted CA. These credentials are
defined on the CSP and passed to the system through the administrators of the CSP. A cloud
account defines whether the system can successfully communicate and authenticate with the
CSP by using the account credentials.

If the system is authenticated, it can then access cloud storage to copy data to the cloud
storage or restore data that is copied to cloud storage back to the system. The system
supports one cloud account to a single CSP. Migration between providers is not supported.

Note: Before enabling TCT, consider the following requirements:


򐂰 Ensure that the DNS server is configured on your system and accessible.
򐂰 Determine whether your company’s security policies require enabled encryption. If yes,
ensure that the encryption licenses are properly installed and encryption enabled.



The system supports connections to various CSPs. Some CSPs require connections over
external networks, and others can be created on a private network.

Each CSP requires different configuration options. The system supports the following CSPs:
򐂰 IBM Cloud
The system can connect to IBM Cloud, which is a cloud computing platform that combines
platform as a service (PaaS) with infrastructure as a service (IaaS).
򐂰 OpenStack Swift
OpenStack Swift is a standard cloud computing architecture from which administrators
can manage storage and networking resources in a single private cloud environment.
Standard application programming interfaces (APIs) can be used to build customizable
solutions for a private cloud solution.
򐂰 Amazon Simple Storage Service (Amazon S3)
Amazon S3 provides programmers and storage administrators with flexible and secure
public cloud storage. Amazon S3 is also based on Object Storage standards and provides
a web-based interface to manage, back up, and restore data over the web.

To view your IBM Spectrum Virtualize cloud provider settings, from the IBM Storage System
Settings pane, click Settings and select System. Then, select Transparent Cloud Tiering,
as shown in Figure 4-87.

Figure 4-87 Transparent Cloud Tiering settings

By using this view, you can enable and disable features of your TCT and update the system
information concerning your CSP. This pane allows you to set the following options:
򐂰 CSP.
򐂰 Cloud Object Storage URL.
򐂰 The tenant or the container information that is associated to your Cloud Object Storage.
򐂰 User name of the cloud object account.
򐂰 Application programming interface (API) key.
򐂰 The container prefix or location of your object.
򐂰 Encryption.
򐂰 Bandwidth.

For more information about how to configure and enable TCT, see 10.3, “Transparent Cloud
Tiering” on page 576.

Automatic Configuration for Virtualization


If you are using this system as external storage for an IBM SAN Volume Controller that uses
FC, this system can be automatically configured following best practices for use as external
storage behind a SAN Volume Controller.

This process completes the following actions automatically:


򐂰 Creates the appropriate redundant array of independent disks (RAID) arrays based on the
technology type of the drives.
򐂰 Creates a pool for each array.
򐂰 Provisions all capacity in each pool to volumes based on best practices.
򐂰 Maps all volumes to the SAN Volume Controller system for virtualization as MDisks.

Figure 4-88 shows how to enable Automatic Configuration for Virtualization.

Figure 4-88 Automatic Configuration for Virtualization

4.10.6 Support menu


Use the Support pane to configure and manage connections and upload support packages to
the IBM Support Center.

The following options are available from the menu:


򐂰 Call Home
The Call Home feature transmits operational and event-related data to you and IBM
through a Simple Mail Transfer Protocol (SMTP) server connection in the form of an event
notification email. When configured, this function alerts IBM Support personnel about
hardware failures and potentially serious configuration or environmental issues.



This view provides the following useful information about email notification and Call Home
information (among others), as shown in Figure 4-89:
– IP of the email server (SMTP server) and port.
– Call Home email address.
– Email of one or more users set to receive one or more email notifications.
– Contact information of the person in the organization that is responsible for the system.

Figure 4-89 Call Home settings

򐂰 Support Assistance
This option enables IBM Support personnel to access the system to complete
troubleshooting and maintenance tasks. You can configure local Support Assistance,
where IBM Support personnel visit your site to fix problems with the system, or Remote
Support Assistance. Both local and Remote Support Assistance use secure connections
to protect data exchange between the IBM Support Center and the system. More access
controls can be added by the system administrator.
򐂰 Support Package
If Support Assistance is configured on your systems, you can automatically or manually
upload new support packages to the IBM Support Center to help analyze and resolve
errors on the system.

The menus are available by selecting Settings → Support, as shown in Figure 4-90.

Figure 4-90 Support menu

For more information about how the Support menu helps with troubleshooting your system or
how to back up your systems, see Chapter 13, “Reliability, availability, and serviceability,
monitoring and logging, and troubleshooting” on page 769.

4.10.7 GUI Preferences menu


The GUI Preferences menu consists of the following options:
򐂰 Login
򐂰 General

Figure 4-91 shows the GUI Preferences selection window.

Figure 4-91 GUI Preferences selection window

Login message
IBM Spectrum Virtualize enables administrators to configure the welcome banner (login
message). This message is a text message that appears in the GUI login window or at the
CLI login prompt.



The content of the welcome message is helpful when you need to notify users about some
important information about the system, such as security warnings or a location description.
To define and enable the welcome message by using the GUI, edit the text area with the
message content and click Save (see Figure 4-92).

Figure 4-92 Enabling the login message

The resulting login dialog box is shown in Figure 4-93.

Figure 4-93 Welcome message in the GUI

The banner message also appears in the CLI login prompt window, as shown in Figure 4-94.

Figure 4-94 Welcome message in CLI

General Settings
With the General Settings menu, you can refresh the GUI cache, set the low graphics mode
option, and enable advanced pools settings.

To configure general GUI preferences, complete the following steps:


1. From the Settings pane, click Settings and select GUI Preferences (see Figure 4-95).

Figure 4-95 General GUI Preferences window

2. You can configure the following elements:


– Clear customizations
This option deletes all GUI preferences that are stored in the browser and restores the
default preferences.
– Default logout time
Measured in minutes of inactivity in the established session.
– IBM Knowledge Center
You can change the URL of IBM Knowledge Center for IBM Spectrum Virtualize.
– Refresh GUI cache
This option causes the GUI to refresh all of its views and clears the GUI cache. The
GUI looks up every object again. This option is useful if a value or object that is shown
in the CLI is not being reflected in the GUI.

– Advanced pool settings
You can select the extent size during storage pool creation.
– Accessibility
Enables low graphics mode when the system is connected through a slower network.

4.11 Additional frequent tasks in the GUI


This section describes additional options and tasks that are available in the GUI of your
system and that administrators frequently use.

4.11.1 Renaming components


These sections provide guidance about how to rename your system and canisters.



Renaming your storage system
All objects in the system have names that are user-defined or system-generated. Choose a
meaningful name when you create an object. If you do not choose a name for the object, the
system generates a name for you.

A well-chosen name serves both as a label for an object and as a tool for tracking and
managing the object. Choosing a meaningful name is important if you decide to use
configuration backup and restore.

When you choose a name for an object, apply the following naming rules:
򐂰 Names must begin with a letter.

Important: Do not start names by using an underscore (_) character even though it is
possible. Using an underscore as the first character of a name is a reserved naming
convention that is used by the system configuration restore process.

򐂰 The first character cannot be numeric.


򐂰 The name can be a maximum of 63 characters, but there are exceptions: names of
Remote Copy (RC) relationships and groups, as used by the lsrcrelationship and
lsrcrelationshipcandidate commands, can be a maximum of 15 characters. Also, the
lsfabric command displays long node and system names truncated to 15 characters.
򐂰 Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9), the
underscore (_) character, a period (.), a hyphen (-), and a space.
򐂰 Names must not begin or end with a space.
򐂰 Object names must be unique within the object type. For example, you can have a volume
that is called ABC and an MDisk called ABC, but you cannot have two volumes that are
called ABC.
򐂰 The default object name is valid (an object prefix with an integer).
򐂰 Objects can be renamed to their current names.

To rename the system from the System pane, complete the following steps:
1. Select Monitoring → System - Overview, and click Actions in the upper left corner of
the pane, as shown in Figure 4-96.

Figure 4-96 Actions menu in the System - Overview pane

2. The Rename System pane opens (see Figure 4-97 on page 215). Specify a new name for
the system and click Rename.

Figure 4-97 Renaming the system

System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The clustered system name can be 1 - 63 characters.

3. If you are certain that you want to change the system name, click Yes.

Warning: When you rename your system, the iSCSI name automatically changes because
it includes the system name by default. Therefore, renaming the system requires additional
reconfiguration of iSCSI-attached hosts.
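
The same rename can be performed from the CLI by running the chsystem command, as in
the following sketch (the new system name is an example). The command returns no
feedback if it is successful:

IBM_Storwize:ITSOV7K:superuser>chsystem -name ITSO_FS9100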

Renaming a node canister


To rename a node canister, complete the following steps:
1. Go to the System - Overview window and right-click a node that you want to rename, as
shown in Figure 4-98. Click Rename.

Figure 4-98 Renaming a node on the System - Overview window



2. Enter the new name of the node and click Rename (see Figure 4-99).

Figure 4-99 Entering the new name of the node

Warning: Changing the node canister name causes an automatic IQN update and
requires the reconfiguration of all iSCSI-attached hosts.
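
On the CLI, the equivalent action is performed by running the chnodecanister command with
the new name and the current node name or ID. A minimal sketch, in which both names are
examples:

IBM_Storwize:ITSOV7K:superuser>chnodecanister -name node1_new node1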

4.11.2 Working with enclosures


The following section describes how to add an expansion enclosure to, or remove one from,
your IBM Storage System.

Adding an enclosure
After the expansion enclosure is properly attached and powered on, complete the following
steps to activate it in the system:
1. In the System pane that is available from the Monitoring menu, select SAS Chain View.
Only correctly attached and powered on enclosures appear in the window, as shown in
Figure 4-100. The new enclosure is shown as unmanaged, which means that it is not yet
part of the system.

Figure 4-100 Newly detected expansion enclosure

2. Click the + next to the enclosure that you want to add, or click Add Enclosure at the top.
These buttons appear only if there is an unmanaged enclosure that is eligible to be added
to the system. A window then opens in which you select the enclosure that you want to
add. Expansion enclosures that are directly cabled do not need to be selected, as shown
in Figure 4-101.

Figure 4-101 Adding an enclosure

3. Select Next and then Finish after you are satisfied with your selections. The enclosures
are then added to the system and appear as managed. Instead of the + button, you see a
>, which allows you to view details about the enclosure because it is now part of the
system, as shown in Figure 4-102.

Figure 4-102 Enclosure successfully added
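
If you work from the CLI instead, you can list unmanaged enclosures and set one to
managed, as in the following sketch (the enclosure ID is an example):

IBM_Storwize:ITSOV7K:superuser>lsenclosure -filtervalue managed=no
IBM_Storwize:ITSOV7K:superuser>chenclosure -managed yes 2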



Removing an enclosure
The enclosure removal procedure includes logically detaching the enclosure from the system
by using the GUI and physically unmounting it from the rack. The IBM Storage System guides
you through this process. Complete the following steps:
1. In the System pane that is available from the Monitoring menu, select > next to the
enclosure that you want to remove. The Enclosure Details pane opens. You can then click
Enclosure Actions and select Remove, as shown in Figure 4-103.

Figure 4-103 Selecting an enclosure for removal

2. The system prompts you to confirm the removal. All disk drives in the enclosure that is
being removed must be in the Unused state. Otherwise, the removal process fails (see
Figure 4-104).

Figure 4-104 Confirming the removal

3. After the enclosure is logically removed from the system (set to the Unmanaged state), the
system reminds you about the steps that are necessary for physical removal, such as
power off, uncabling, dismantling from the rack, and secure handling (see Figure 4-105).

Figure 4-105 Enclosure removed
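
The logical removal can also be done from the CLI, as in the following sketch, assuming
enclosure ID 2 and that all of its drives are already in the Unused state:

IBM_Storwize:ITSOV7K:superuser>chenclosure -managed no 2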

As part of the enclosure removal process, consult your company's security policies about how
to handle sensitive data on removed storage devices before they leave the secure data
center. Most companies require data to be encrypted or logically shredded.

4.11.3 Restarting the GUI service


The service that runs the GUI operates on the configuration node. Occasionally, you might
need to restart this service if the GUI is not performing as expected (or you cannot connect).
To do this task, complete the following steps:
1. Log in to the Service Assistant and identify the configuration node, as shown in
Figure 4-106.


Figure 4-106 Identifying the configuration node on the Service Assistant



2. After you identify the configuration node, go to Restart Service, as shown in Figure 4-107.

Figure 4-107 Restarting the Tomcat web server

3. Select Web Server (Tomcat). Click Restart, and the web server that runs the GUI
restarts. This task is a concurrent action, but the cluster GUI is unavailable while the
server is restarting (the Service Assistant and CLI are not affected). After 5 minutes, check
to see whether GUI access was restored.
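
Alternatively, the web server can be restarted from the CLI by running the satask
restartservice command against the configuration node. A minimal sketch:

IBM_Storwize:ITSOV7K:superuser>satask restartservice -service tomcat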


Chapter 5. Storage pools


This chapter describes how the storage system manages physical storage resources. All
storage resources that are under system control are managed by using storage pools or
managed disk (MDisk) groups.

Storage pools aggregate internal and external capacity and provide the containers in which
you can create volumes. Storage pools make it easier to dynamically allocate resources,
maximize productivity, and reduce costs.

You can configure storage pools through the management GUI, either during initial
configuration or later. Alternatively, you can configure the storage to your own requirements
by using the command-line interface (CLI).

This chapter includes the following topics:


򐂰 5.1, “Working with storage pools” on page 222
򐂰 5.2, “Working with internal drives and arrays” on page 240
򐂰 5.3, “Working with external controllers and MDisks” on page 264



5.1 Working with storage pools
Storage pools act as containers for MDisks, which provide storage capacity to the pool, and
volumes that are provisioned from this capacity, which can be mapped to host systems. The
system organizes storage in this fashion to ease storage management and make it more
efficient.

MDisks can either be redundant array of independent disks (RAID) arrays that are created by
using internal storage, such as drives and flash modules, or logical units (LUs) that are
provided by external storage systems. A single storage pool can contain both types of
MDisks, but a single MDisk can be part of only one storage pool. MDisks themselves are not
visible to host systems.

Figure 5-1 provides an overview of how storage pools, MDisks, and volumes are related.

Figure 5-1 Relationship between MDisks, storage pools, and volumes

All MDisks in a pool are split into chunks of the same size, which are called extents. Volumes
are created from the set of available extents in the pool. The extent size is a property of the
storage pool and cannot be changed after the pool is created. The choice of extent size
affects the total amount of storage that can be managed by the system.
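
As a rough illustration, assuming the published limit of 4,194,304 extents (2^22) per system
at this code level, the maximum manageable capacity scales linearly with the extent size: a
1 GB extent size allows about 4 PB of total managed capacity, a 4 GB extent size about
16 PB, and an 8 GB extent size about 32 PB.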

It is possible to add MDisks to an existing pool to provide more usable capacity in the form of
extents. The system automatically balances volume extents between the MDisks to provide
the best performance to the volumes. It is also possible to remove extents from the pool by
deleting an MDisk. The system automatically migrates extents that are in use by volumes to
other MDisks in the same pool to make sure that the data on the extents is preserved.

A storage pool represents a failure domain. If one or more MDisks in a pool become
inaccessible, all volumes (except for image mode volumes) in that pool are affected. Volumes
in other pools are unaffected.

The system supports standard pools (parent pools and child pools) and Data Reduction
Pools (DRPs).

Child pools are created from existing capacity that is assigned to a parent pool instead of
being created directly from MDisks. When the child pool is created, the capacity for a child
pool is reserved from the parent pool. This capacity is no longer reported as available
capacity of the parent pool. In terms of volume creation and management, child pools are
similar to parent pools.

DRPs use a set of techniques that can be used to reduce the amount of usable capacity that
is required to store data, such as compression and deduplication. Data reduction can
increase storage efficiency and performance, and reduce storage costs, especially for flash
storage. DRPs automatically reclaim capacity that is no longer needed by host systems. This
reclaimed capacity is given back to the pool as usable capacity and can be reused by other
volumes. A DRP cannot be used as a parent pool to create a child pool.

For more information about DRP planning and implementation, see Chapter 9, “Advanced
features for storage efficiency” on page 463 and Introduction and Implementation of Data
Reduction Pools and Deduplication, SG24-8430.

In general, you manage storage as follows:


1. Create storage pools (standard or DRP), depending on your requirements and sizing.
2. Assign storage to these pools by using one or more of the following options:
– Create array MDisks from internal drives or flash modules.
– Add MDisks provisioned from external storage systems.
3. Create volumes in these pools and map them to hosts or host clusters.

You manage storage pools either in the Pools pane of the GUI or by using the CLI. To access
the Pools pane, select Pools → Pools, as shown in Figure 5-2.

Figure 5-2 Accessing the Pools pane

The pane lists all storage pools and their major parameters. If a storage pool has child pools,
they are also shown.

To see a list of configured storage pools by using the CLI, run the lsmdiskgrp command
without any parameters, as shown in Example 5-1.

Example 5-1 The lsmdiskgrp output (some columns are not shown)
IBM_Storwize:ITSOV7K:superuser>lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
0 Pool0 online 1 5 821.00GB 1024 771.00GB
1 Pool1 online 1 5 7.21TB 4096 6.14TB



5.1.1 Creating storage pools
To create a storage pool, complete the following steps:
1. Select Pools → MDisks by Pools and click Create Pool, or select Pools → Pools and
click Create, as shown in Figure 5-3.

Figure 5-3 Option to create a storage pool in the Pools pane

Both alternatives open the dialog box that is shown in Figure 5-4.

Figure 5-4 Create Pool dialog box

2. Select the Data reduction check box to create a DRP. Leaving it cleared creates a
standard storage pool.

Note: DRPs require careful planning and sizing. Limitations and performance
characteristics of DRPs are different from standard pools.

A standard storage pool that is created by using the GUI has a default extent size of 1 GB.
DRPs have a default extent size of 4 GB. The size of the extents is selected at creation
time and cannot be changed later. The extent size controls the maximum total storage
capacity that is manageable per system (across all pools). For DRPs, the extent size also
controls the maximum capacity after reduction in the pool itself.
For more information about the differences between standard pools and DRPs and for
extent size planning, see Chapter 2, “Planning” on page 77 and Chapter 9, “Advanced
features for storage efficiency” on page 463.

Note: Do not create DRPs with small extent sizes. For more information, see this IBM
Support alert.

When creating a standard pool, you cannot change the extent size by using the GUI by
default. If you want to specify a different extent size, enable this option by selecting
Settings → GUI Preferences → General and checking Advanced pool settings, as
shown in Figure 5-5. Click Save.

Figure 5-5 Advanced pool settings

When the Advanced pool settings option is enabled, you can also select an extent size
for standard pools at creation time, as shown in Figure 5-6.

Figure 5-6 Creating a pool with Advanced pool settings selected



If an encryption license is installed and enabled, you can select whether the storage pool
is encrypted, as shown in Figure 5-7. The encryption setting of a storage pool is selected
at creation time and cannot be changed later. By default, if encryption is enabled,
encryption is selected. For more information about encryption and encrypted storage
pools, see Chapter 12, “Encryption” on page 701.

Figure 5-7 Creating a pool with encryption enabled

3. Enter the name for the pool and click Create.

Naming rules: When you choose a name for a pool, the following rules apply:
򐂰 Names must begin with a letter.
򐂰 The first character cannot be numeric.
򐂰 The name can be a maximum of 63 characters.
򐂰 Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9),
underscore (_), period (.), hyphen (-), and space.
򐂰 Names must not begin or end with a space.
򐂰 Object names must be unique within the object type. For example, you can have a
volume that is named ABC and a storage pool that is called ABC, but not two
storage pools that are both called ABC.
򐂰 The default object name is valid (object prefix with an integer).
򐂰 Objects can be renamed at a later stage.

The new pool is created and is included in the list of storage pools with zero bytes, as shown
in Figure 5-8.

Figure 5-8 Newly created empty pool

To perform this task by using the CLI, run the mkmdiskgrp command. The only required
parameter is the extent size, which is specified by the -ext parameter and must have one of
the following values: 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, or 8192 (MB). To create a
DRP, specify -datareduction yes. The minimum extent size of DRPs is 1024, and attempting
to use a smaller extent size sets the extent size to 1024.

In Example 5-2, the command creates a DRP that is named Pool2 with no MDisks in it.

Example 5-2 The mkmdiskgrp command


IBM_Storwize:ITSOV7K:superuser>mkmdiskgrp -name Pool2 -datareduction yes -ext 8192
MDisk Group, id [2], successfully created

5.1.2 Managed disks in a storage pool


A storage pool is created as an empty container with no storage that is assigned to it. Storage
is then added in the form of MDisks. An MDisk can be either a RAID array from internal
storage (as an array of drives) or a LU from an external storage system. The same storage
pool can include both internal and external MDisks.

Arrays are assigned to storage pools at creation time. Arrays cannot exist outside of a storage
pool and they cannot be moved between storage pools. It is only possible to destroy an array
by removing it from a pool and re-creating it within a new pool.

External MDisks can exist within a pool or outside of a pool. The MDisk object remains on a
system if it is visible from external storage, but its access mode changes depending on
whether it is assigned to a pool or not.

MDisks are managed by using the MDisks by Pools pane. To access the MDisks by Pools
pane, select Pools → MDisks by Pools, as shown in Figure 5-9.

Figure 5-9 MDisks by Pools

The pane lists all the MDisks that are available in the system under the storage pool to which
they belong. Unassigned MDisks are listed separately at the top. Both arrays and external
MDisks are listed. For more information about operations with array MDisks, see 5.2,
“Working with internal drives and arrays” on page 240. To implement a solution with external
MDisks, see 5.3, “Working with external controllers and MDisks” on page 264.

To list all MDisks that are visible by the system by using the CLI, run the lsmdisk command
without any parameters. If required, you can filter output to include only external or only array
type MDisks.



5.1.3 Actions on storage pools
A number of actions can be performed on storage pools. To select an action, select the
storage pool and click Actions, as shown in Figure 5-10. Alternatively, right-click the storage
pool.

Figure 5-10 Pools actions menu

Create Child Pool window


To create a child storage pool, click Create Child Pool. For more information about child
storage pools and a detailed description of this wizard, see 5.1.4, “Child pools” on page 234.
It is not possible to create a child pool from an empty pool or from a DRP.

Rename window
To modify the name of a storage pool, click Rename. Enter the new name and click Rename
in the dialog window.

To do this task by using the CLI, run the chmdiskgrp command. Example 5-3 shows how to
rename Pool2 to StandardStoragePool. If successful, the command returns no output.

Example 5-3 Using chmdiskgrp to rename a storage pool


IBM_Storwize:ITSOV7K:superuser>chmdiskgrp -name StandardStoragePool Pool2
IBM_Storwize:ITSOV7K:superuser>

Modify Threshold window


A warning event is generated when the amount of used capacity in the pool exceeds the
warning threshold. When you use thin-provisioned volumes that auto-expand (automatically
use available extents from the pool), monitor the capacity usage and get warnings before the
pool runs out of free extents so that you can add storage before running out of space.

Note: The warning is generated only the first time that the threshold is exceeded by the
used capacity in the storage pool.

To modify the threshold, select Modify Threshold and enter the new value. The default
threshold is 80%. To disable warnings, set the threshold to 0%.

The threshold is visible in the pool properties and indicated by a red bar, as shown in
Figure 5-11 on page 229.

Figure 5-11 Pool properties with a warning threshold

To do the task by using the CLI, run the chmdiskgrp command. You can specify the threshold
by using a percentage. You can also set an exact value and specify a unit.

Example 5-4 shows the warning threshold set to 750 GB for Pool0.

Example 5-4 Changing the warning threshold level by using the CLI
IBM_Storwize:ITSOV7K:superuser>chmdiskgrp -warning 750 -unit gb Pool0
IBM_Storwize:ITSOV7K:superuser>

Modify Maximum Overallocation Limit window


If the system contains self-compressing drives (IBM FlashCore Module (FCM) drives) in the
top tier of storage in a pool with multiple tiers and Easy Tier is in use, consider setting an
overallocation limit within these pools.

Easy Tier migrates storage only at a slow rate, which might not keep up with changes to the
compression ratio within the tier. This situation might result in the tier running out of space,
which can cause a loss of access to data until the condition is resolved.

Therefore, the user might specify the maximum overallocation ratio for pools that contain
self-compressing arrays to prevent out-of-space scenarios, as shown in Figure 5-12.

Figure 5-12 Modifying the pool overallocation limit



The value acts as a multiplier of the physically available space in self-compressing arrays.
The allowed values are a percentage in the range of 100% (default) to 400%, or off. The
default setting prevents overallocation of new pools. Setting the value to off disables this
feature.

On the CLI, run the chmdiskgrp command with the -etfcmoverallocationmax parameter to
set a percentage or use off to disable the limit.
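
For example, the following sketch sets the overallocation limit of Pool0 to 150% (the pool
name and value are illustrative):

IBM_Storwize:ITSOV7K:superuser>chmdiskgrp -etfcmoverallocationmax 150 Pool0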

For more information and a more detailed explanation, see Chapter 9, “Advanced features for
storage efficiency” on page 463.

Add Storage to Pool window


This action starts the configuration wizard, which assigns storage to the pool, as shown in
Figure 5-13.

Figure 5-13 Add Storage to Pool wizard

If Internal or Internal Custom is chosen, the system guides you through array MDisk
creation by using internal drives. If External is selected, the system guides you through the
selection of external storage MDisks. If no external storage is attached, the External option is
not shown.

Edit Throttle for Pool window


Click this option to access the window where you set the pool’s throttle configuration.

Throttles can be defined for storage pools to control I/O operations. If a throttle limit is
defined, the system either processes the I/O for that object, or delays the processing of the
I/O. Resources become free for more critical I/O operations.

You can use storage pool throttles to avoid overwhelming the back-end storage. Only parent
pools support throttles because only parent pools contain MDisks from internal or external
back-end storage. For volumes in child pools, the throttle of the parent pool is applied.

You can define a throttle for input/output operations per second (IOPS), bandwidth, or both,
as shown in Figure 5-14:
򐂰 IOPS limit indicates the limit of configured IOPS (for both reads and writes combined).
򐂰 Bandwidth limit indicates the bandwidth limit in megabytes per second (MBps). You can
also specify the limit in gigabits per second (Gbps) or terabytes per second (TBps).

Figure 5-14 Edit Throttle for Pool window

If more than one throttle applies to an I/O operation, the lowest and most stringent throttle is
used. For example, if a throttle of 100 MBps is defined on a pool and a throttle of 200 MBps is
defined on a volume of that pool, the I/O operations are limited to 100 MBps.

The throttle limit is a per node limit. For example, if a throttle limit is set for a volume at
100 IOPS, each node on the system that has access to the volume allows 100 IOPS for that
volume. Any I/O operation that exceeds the throttle limit is queued at the receiving nodes. The
multipath policies on the host determine how many nodes receive I/O operations and the
effective throttle limit.

If a throttle exists for the storage pool, the dialog box that is shown in Figure 5-14 also shows
the Remove button that is used to delete the throttle.

To set a storage pool throttle by using the CLI, run the mkthrottle command. Example 5-5
shows a storage pool throttle, named iops_bw_limit, that is set to 3 MBps and 1000 IOPS on
Pool0.

Example 5-5 Setting a storage pool throttle by using the CLI


IBM_Storwize:ITSOV7K:superuser>mkthrottle -type mdiskgrp -iops 1000 -bandwidth 3
-name iops_bw_limit -mdiskgrp Pool0
Throttle, id [0], successfully created.



To remove a throttle by using the CLI, run the rmthrottle command. The command uses the
throttle ID or throttle name as an argument, as shown in Example 5-6. The command returns
no feedback if it runs successfully.

Example 5-6 Removing a pool throttle by using the CLI


IBM_Storwize:ITSOV7K:superuser>rmthrottle iops_bw_limit
IBM_Storwize:ITSOV7K:superuser>

View All Throttles window


You can display the defined throttles by using the Pools pane. Right-click a pool and select
View all Throttles to display the list of the pool’s throttles. If you want to view the throttle of
other elements (like Volumes or Hosts, for example), you can select All Throttles in the
drop-down list, as shown in Figure 5-15.

Figure 5-15 Viewing all throttles

To see a list of created throttles by using the CLI, run the lsthrottle command. When you
run the command without arguments, it displays a list of all throttles on the system. To list only
storage pool throttles, specify the -filtervalue throttle_type=mdiskgrp parameter.

View Resources window


To browse a list of MDisks that are part of the storage pool, click View Resources, which
opens the window that is shown in Figure 5-16 on page 233.

Figure 5-16 List of resources in the storage pool

To list storage pool resources by using the CLI, run the lsmdisk command. You can filter the
output to display MDisk objects that belong only to a single MDisk group (storage pool), as
shown in Example 5-7.

Example 5-7 Using lsmdisk (some columns are not shown)


IBM_Storwize:ITSOV7K:superuser>lsmdisk -filtervalue mdisk_grp_name=Pool0
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
0 mdisk0 online array 0 Pool0 7.6TB
16 mdisk1 online array 0 Pool0 2.7TB

Deleting a storage pool


A storage pool can be deleted by using the GUI only if no volumes are associated with it.
Select Delete to delete the pool immediately without any additional confirmation.

If there are volumes in the pool, the Delete option is inactive and cannot be selected. Delete
the volumes or migrate them to another storage pool before proceeding. For more information
about volume migration and volume mirroring, see Chapter 6, “Volumes” on page 277.

After you delete a pool, the following actions occur:


򐂰 All the external MDisks in the pool return to a mode of Unmanaged.
򐂰 All the array mode MDisks in the pool are deleted and all member drives return to a status
of Candidate.

To delete a storage pool by using the CLI, run the rmmdiskgrp command.

Note: Be extremely careful when you run the rmmdiskgrp command with the -force
parameter. Unlike the GUI, it does not prevent you from deleting a storage pool with
volumes. This command deletes all volumes and host mappings on a storage pool, and
they cannot be recovered.
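
For example, deleting an empty pool by name is a single command (the pool name is an
example). The command returns no feedback if it is successful:

IBM_Storwize:ITSOV7K:superuser>rmmdiskgrp StandardStoragePool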



Properties for Pool window
Select Properties to display information about the storage pool. By hovering your cursor over
the elements of the window and clicking [?], you see a short description of each property, as
shown in Figure 5-17.

Figure 5-17 Pool properties and details

To display detailed information about the properties by using the CLI, run the lsmdiskgrp
command with a storage pool name or ID as a parameter, as shown in Example 5-8.

Example 5-8 The lsmdiskgrp output (partially shown)


IBM_Storwize:ITSOV7K:superuser>lsmdiskgrp TieredPool
id 0
name TieredPool
status online
mdisk_count 8
vdisk_count 21
capacity 24.00TB
extent_size 1024
free_capacity 3.90TB
<...>

5.1.4 Child pools


A child pool is a storage pool that is created within another storage pool. The storage pool in
which the child storage pool is created is called the parent storage pool.

Unlike a parent pool, a child pool does not contain MDisks. Its capacity is provided exclusively
by the parent pool in the form of extents. The capacity of a child pool is set at creation time,
but can be modified later nondisruptively. The capacity must be a multiple of the parent pool
extent size and must be smaller than the free capacity of the parent pool. Capacity that is
assigned to a child pool is taken away from the capacity of the parent pool.

A child pool cannot be created from a DRP. Creating a child pool within another child pool is
not possible either.

Child pools are useful when the capacity that is allocated to a specific set of volumes must be
controlled. For example, child pools can be used with VMware vSphere Virtual Volumes
(VVOLs). Storage administrators can restrict access of VMware administrators to only a part
of the storage pool and prevent volumes creation from affecting the rest of the parent storage
pool.

Ownership groups can be used to restrict access to storage resources to a specific set of
users, as described in Chapter 11, “Ownership groups” on page 689.

Child pools can also be useful when strict control over thin-provisioned volume expansion is
needed. For example, you might create a child pool with no volumes in it to act as an
emergency set of extents so that if the parent pool ever runs out of free extents, you can use
the ones from the child pool.

On systems with encryption enabled, child pools can be created to migrate existing volumes
in a non-encrypted pool to encrypted child pools. When you create a child pool after
encryption is enabled, an encryption key is created for the child pool even when the parent
pool is not encrypted. You can then use volume mirroring to migrate the volumes from the
non-encrypted parent pool to the encrypted child pool. Child pools can also be used when a
different encryption key is needed for different sets of volumes.

Child pools inherit most properties from their parent pools, and these properties cannot be
changed. The inherited properties include:
򐂰 Extent size
򐂰 Easy Tier setting
򐂰 Encryption setting, but only if the parent pool is encrypted

Creating a child storage pool


To create a child pool, complete the following steps:
1. Select Pools → Pools, right-click the parent pool that you want to create a child pool from,
and select Create Child Pool, as shown in Figure 5-18.

Figure 5-18 Creating a child pool



2. When the dialog box opens, enter the name and capacity of the child pool and click
Create, as shown in Figure 5-19.

Figure 5-19 Creating a child pool

After the child pool is created, it is listed in the Pools pane under its parent pool. Toggle the
sign to the left of the storage pool icon to either show or hide the child pools, as shown in
Figure 5-20. The capacity that is assigned to the child pools is not usable in the parent pool,
as shown by the gray area on the capacity details bar of the parent pool.

Figure 5-20 Listing parent and child pools

To create a child pool by using the CLI, run the mkmdiskgrp command. You must specify the
parent pool for your new child pool and its size, as shown in Example 5-9. The size is in
megabytes by default (unless the -unit parameter is used) and must be a multiple of the
parent pool’s extent size. In this case, it is 100 * 1024 MB = 100 GB.

Example 5-9 The mkmdiskgrp command to create child pools


IBM_Storwize:ITSOV7K:superuser>mkmdiskgrp -parentmdiskgrp Pool0 -size 102400 -name
Pool0_child0
MDisk Group, id [4], successfully created

Actions for child storage pools


You can rename, resize, and delete a child pool. Also, it is possible to modify its warning
threshold and assign it to an ownership group. To select an action, for example, Resize,
complete the following steps:
1. Right-click the child storage pool, as shown in Figure 5-21 on page 237. Alternatively,
select the storage pool and click Actions.

Figure 5-21 Actions for child storage pools

2. Select Resize to increase or decrease the capacity of the child storage pool, as shown in
Figure 5-22. Enter the new pool capacity and click Resize.

Note: You cannot shrink a child pool below its real capacity. Thus, the new size of a
child pool must be larger than the capacity that is used by its volumes.

Figure 5-22 Resizing a child pool

When the child pool is shrunk, the system resets the warning threshold and issues a
warning if the threshold is reached.

To rename and resize a child pool by using the CLI, run the chmdiskgrp command.
Example 5-10 renames the child pool Pool0_child0 to Pool0_child_new and reduces its size
to 44 GB. If successful, the command returns no feedback.

Example 5-10 Running the chmdiskgrp command to rename a child pool


IBM_Storwize:ITSOV7K:superuser>chmdiskgrp -name Pool0_child_new -size 45056
Pool0_child0
IBM_Storwize:ITSOV7K:superuser>



Deleting a child pool is similar to deleting a parent pool. As with a parent pool, the
Delete action is disabled if the child pool contains volumes, as shown in Figure 5-23.

Figure 5-23 Deleting a child pool

After you delete a child pool, the extents that it occupied return to the parent pool as free
capacity.

To delete a child pool by using the CLI, run the rmmdiskgrp command.

To assign an existing ownership group to a child pool, click Manage Ownership Group, as
shown in Figure 5-24. All volumes that are created in the child pool inherit the ownership
group of the child pool. For more information, see Chapter 11, “Ownership groups” on
page 689.

Figure 5-24 Managing the ownership group of a child pool

Migrating volumes to and from child pools
To move a volume to another pool, you can use migration or volume mirroring in the same
way that you use them for parent pools. For more information about volume migration and
volume mirroring, see Chapter 6, “Volumes” on page 277.

The system supports migration of volumes between child pools within the same parent pool
or migration of a volume between a child pool and its parent pool. Migrations between a
source and target child pool with different parent pools are not supported. However, you can
migrate the volume from the source child pool to its parent pool. Then, the volume can be
migrated from the parent pool to the parent pool of the target child pool. Finally, the volume
can be migrated from the target parent pool to the target child pool.

During a volume migration within a parent pool (between a child and its parent or between
children with the same parent), there is no data movement, but there are extent
reassignments.

Volume migration between a child storage pool and its parent storage pool can be performed
from the Volumes by Pool page. Right-click a volume and select the migrate action to move it
to a suitable pool.

In the example in Figure 5-25, the volume child_volume was created in child pool
Pool0_child0. The child pools appear exactly like the parent pools in the Volumes by Pool
pane.

Figure 5-25 Actions menu in Volumes by Pool

For more information about the CLI commands for migrating volumes to and from child pools,
see Chapter 6, “Volumes” on page 277.

5.1.5 Encrypted storage pools


The system supports two types of encryption: hardware encryption and software encryption.
Hardware encryption is implemented at an array level, and software encryption is
implemented at a storage pool level. For more information about encryption and encrypted
storage pools, see Chapter 12, “Encryption” on page 701.



5.2 Working with internal drives and arrays
An array is a type of MDisk that is made up of disk drives (or flash modules); these drives are
members of the array. RAID is a method of configuring member drives to create high
availability (HA) and high-performance groupings of drives. The system supports
nondistributed (traditional) and distributed array configurations.

5.2.1 Working with drives


This section describes how to manage internal storage disk drives and configure them to be
used in arrays.

Listing disk drives


The system provides an Internal Storage pane for managing all internal drives. To access the
Internal Storage pane, select Pools → Internal Storage, as shown in Figure 5-26.

Figure 5-26 Internal Storage pane

This pane gives an overview of the internal drives in the system. To display all drives that are
managed in the system, including all I/O groups and expansion enclosures, click All Internal
Storage in the Drive Class filter.

Alternatively, you can filter the drives by their type or class. For example, you can choose to
show only enterprise drives, nearline (NL) drives, or flash drives. Select the class on the left
side of the pane to filter the list and display only the drives of the selected class.

You can find information about the capacity allocation of each drive class in the upper right
corner, as shown in Figure 5-26:
Assigned to MDisks Shows the storage capacity of the selected drive class that
is assigned to MDisks.
Assigned to Spares Shows the storage capacity of the selected drive class that
is used for spare drives.

Available Shows the storage capacity of the selected drive class that
has not yet been assigned to either MDisks or Spares.
Total Written Capacity Limit Shows the total amount of storage capacity of the drives in
the selected class.

If All Internal Storage is selected under the Drive Class filter, the values that are shown refer
to the entire internal storage.

The percentage bar indicates how much of the total written capacity limit is assigned to
MDisks and spares. MDisk capacity is represented by the solid portion, and spare capacity by
the shaded portion of the bar.

To list all internal drives that are available in the system, run the lsdrive command. If needed,
you can filter the output to list only drives that belong to a particular enclosure, that have a
specific capacity, or by other attributes, as shown in Example 5-11.

Example 5-11 The lsdrive output (some lines and columns are not shown)
IBM_Storwize:ITSOV7K:superuser>lsdrive
id status error_sequence_number use tech_type capacity mdisk_id
0 online spare tier_enterprise 1.1TB
1 online spare tier_enterprise 1.1TB
2 online member tier_nearline 931.0GB 16
3 online candidate tier_enterprise 1.1TB
4 online candidate tier_enterprise 1.1TB
5 online member tier_nearline 931.0GB 16
<...>

The drive list shows the Status of each drive. A drive can be Online, which means that the
drive is fully accessible by both nodes in the I/O group. A Degraded drive is only accessible by
one of the two nodes. A drive status of Offline indicates that the drive is not accessible by
any of the nodes, for example, because it was physically removed from the enclosure or it is
unresponsive or failing.

The drive Use attribute describes the role that it plays in the system. The values and meanings
are:
Unused The system has access to the drive but was not told to take ownership
of it. Most actions on the drive are not permitted. This state is a safe
state for newly added hardware.
Candidate The drive is owned by the system, and is not part of the RAID
configuration. It is available to be used in an array MDisk.
Spare The drive is a hot spare protecting nondistributed (traditional) RAID
arrays. If any member of such an array fails, a spare drive is taken and
becomes a Member for rebuilding the array.
Member The drive is part of a RAID array.
Failed The drive is owned by the system and was diagnosed as faulty. It is
waiting for a service action.



The Use attribute can change to different values, but not all changes are valid, as shown in
Figure 5-27.

Figure 5-27 Drive use changes

The system automatically sets the Use to Member when it creates a RAID array. Changing Use
from Member to Failed is possible only if the array does not depend on the drive, and
additional confirmation is required when taking a drive offline if no spare is available.
Changing a Candidate drive to Failed is possible only by using the CLI.

Note: To start configuring arrays in a new system, all Unused drives must be configured as
Candidates. The Initial Setup or Assign Storage wizards do that automatically.

A number of actions can be performed on internal drives. To perform any action, select one or
more drives and right-click the selection, as shown in Figure 5-28. Alternatively, select the
drives and click Actions.

Figure 5-28 Actions for internal storage

The actions that are available in the drop-down menu depend on the status and usage of the
drive or drives that are selected. Some actions can be performed only on drives in a certain
state, and some are possible only when a single drive is selected.

Action: Fix Error


This action is available only if the drive that is selected has an error event that is associated
with it. Select Fix Error to start the directed maintenance procedure (DMP) for the selected
drive. For more information about DMPs, see Chapter 13, “Reliability, availability, and
serviceability, monitoring and logging, and troubleshooting” on page 769.

Action: Take Offline


If a problem is identified with a specific drive, you can select Take Offline to take the drive
offline. You must confirm the action, as shown in Figure 5-29 on page 243.

Figure 5-29 Taking a drive offline

The system prevents you from taking the drive offline if one of the following conditions is true:
򐂰 Taking the drive offline results in a loss of access to data.
򐂰 The first option was selected and no suitable spares are available to start a rebuild.

If a spare is available and the drive is taken offline, the associated MDisk remains Online and
the RAID array starts a rebuild by using a suitable spare. If no spare is available and the drive
is taken offline by using the second option, the status of the associated MDisk becomes
Degraded. The status of the storage pool to which the MDisk belongs becomes Degraded too.

A drive that is taken offline is considered Failed, as shown in Figure 5-30.

Figure 5-30 An offline drive is marked as Failed

To take a drive offline by using the CLI, run the chdrive command, as shown in
Example 5-12. This command returns no feedback. Use the -allowdegraded parameter to set
a member drive offline even if no suitable spare is available.

Example 5-12 Setting a drive offline by using the CLI


IBM_Storwize:ITSOV7K::superuser>chdrive -use failed 3
IBM_Storwize:ITSOV7K::superuser>



The system prevents you from taking a drive offline if the RAID array depends on that drive
and doing so would result in a loss of access to data, as shown in Figure 5-31.

Figure 5-31 Taking a drive offline fails if it would result in a loss of access to data

Action: Mark as
Select Mark as to change the use that is assigned to the drive, as shown in Figure 5-32. The
list of available options depends on the current drive use and state, and follows the allowed
state transitions that are shown in Figure 5-27.

Figure 5-32 A drive can be marked as Unused, Candidate, or Spare

To change the drive role by using the CLI, run the chdrive command, as shown in
Example 5-13. It shows that the drive that was set offline by a previous command is set as a
spare. The drive cannot go from Failed to Spare in one step. Instead, the drive must be
assigned to a Candidate role before it set to the Spare role.

Example 5-13 Changing the drive role by using the CLI


IBM_Storwize:ITSOV7K:superuser>lsdrive -filtervalue status=offline
id status error_sequence_number use tech_type capacity mdisk_id
3 offline failed tier_enterprise 1.1TB
IBM_Storwize:ITSOV7K:superuser>chdrive -use spare 3
CMMVC6537E The command cannot be initiated because the drive that you have
specified has a Use property that is not supported for the task.
IBM_Storwize:ITSOV7K:superuser>chdrive -use candidate 3
IBM_Storwize:ITSOV7K:superuser>chdrive -use spare 3
IBM_Storwize:ITSOV7K:superuser>

Note: Marking a compressed drive as a Candidate causes the drive to perform a format.
The format must complete before the drive goes online and is available for use.

Action: Identify
Select Identify to turn on the light-emitting diode (LED) light of the enclosure slot of the
selected drive. With this action, you can easily find a drive that must be replaced or that you
want to troubleshoot. A dialog box opens so that you can confirm that the LED was turned on,
as shown in Figure 5-33.

Figure 5-33 Identifying an internal drive

Your action makes an amber LED flash (turn on and off continuously) for the drive that you
want to identify.

Click Turn LED Off when you are finished. The LED returns to its initial state.

On the CLI, run the chenclosureslot command to turn on the LED. Example 5-14 shows the
commands to find the enclosure and slot for drive 21 and to turn on and off the identification
LED of slot 3 in enclosure 1.

Example 5-14 Changing a slot LED to identification mode by using the CLI
IBM_Storwize:ITSOV7K:superuser>lsdrive 21
id 21
<...>
enclosure_id 1
slot_id 3
<...>
IBM_Storwize:ITSOV7K:superuser>chenclosureslot -identify yes -slot 3 1
IBM_Storwize:ITSOV7K:superuser>lsenclosureslot -slot 3 1
enclosure_id 1
slot_id 3
fault_LED slow_flashing
powered yes
drive_present yes
drive_id 21
IBM_Storwize:ITSOV7K:superuser>chenclosureslot -identify no -slot 3 1



Action: Upgrade
Select Upgrade to update the drive firmware, as shown in Figure 5-34. You can choose to
update an individual drive, selected drives, or all the drives in the system.

Figure 5-34 Upgrading a drive or a set of drives

For information about updating the drive firmware, see Chapter 13, “Reliability, availability,
and serviceability, monitoring and logging, and troubleshooting” on page 769.
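
On the CLI, drive firmware can be updated by running the applydrivesoftware command.
The following is a minimal sketch, assuming that the firmware package was already uploaded
to the /home/admin/upgrade directory of the configuration node; the file name and drive ID
are placeholders:

IBM_Storwize:ITSOV7K:superuser>applydrivesoftware -file IBM_DRIVE_FIRMWARE.bin -type firmware -drive 3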

Action: Dependent Volumes


Select Dependent Volumes to list the volumes that depend on the selected drives. A volume
depends on a drive or a set of drives when removal or failure of that drive or set of drives
results in a loss of access or a loss of data for that volume. Use this option before you do
maintenance operations to confirm which volumes (if any) will be affected.

Figure 5-35 shows the list of volumes that depend on a set of three drives that belong to the
same MDisk.

Figure 5-35 List of volumes that depend on disks 7, 8, and 9

All listed volumes go offline if all selected drives go offline concurrently. This does not mean
that volumes go offline if a single drive or two of the three drives go offline.

Whether there are dependent volumes depends on the redundancy of the RAID array at a
certain point. The redundancy is based on the RAID level, state of the array, and state of the
other member drives in the array. For example, it takes three or more drives going offline
concurrently in a healthy RAID 6 array to have dependent volumes.

Note: A lack of dependent volumes does not imply that there are no volumes that use the
drive. Volume dependency shows the list of volumes that become unavailable if the drive or
the set of selected drives becomes unavailable.

You can get the same information by running the lsdependentvdisks command. Use the
parameter -drive with the list of drive IDs that you are checking, separated with a colon (:), as
shown in Example 5-15.

Example 5-15 Listing volumes that depend on drives


IBM_Storwize:ITSOV7K:superuser>lsdependentvdisks -drive 7:8:9
vdisk_id vdisk_name
0 vdisk0
1 vdisk1
2 vdisk2
3 vdisk3
4 vdisk4
5 vdisk5
6 vdisk6



Action: Properties
Select Properties to view more information about the drive, as shown in Figure 5-36.

Figure 5-36 Drive properties

You can find a short description of each drive property by hovering your cursor over it and
clicking [?]. You can also display drive slot details by clicking the Drive Slot tab.

To get all available information about the particular drive, run the lsdrive command with the
drive ID as the parameter. To get slot information, run the lsenclosureslot command.

5.2.2 RAID and distributed redundant array of independent disks


To use internal disks in storage pools, you must join them as RAID arrays to form array mode
MDisks.

RAID provides two key design goals:


򐂰 Increased data reliability
򐂰 Increased input/output (I/O) performance

Introduction to RAID technology


RAID technology can provide better performance for data access, HA for the data, or a
combination of both. RAID levels define a tradeoff between HA, performance, and cost.

When multiple physical disks are set up to use RAID, they are in a RAID array. The system
provides multiple, traditional RAID levels:
򐂰 RAID 0
򐂰 RAID 1
򐂰 RAID 10

Note: RAID 5 and RAID 6 are available only in distributed redundant array of independent
disks (DRAID) configurations. Self-compressing drives support only DRAID 5 and DRAID
6. Storage-class memory (SCM) drives support only traditional RAID 10 with optional
spares and DRAID 5.

RAID 0 does not provide any redundancy. A single drive failure in a RAID 0 array causes
data loss.

In a traditional RAID approach, data is spread among up to 16 drives in an array. There are
separate spare drives that do not belong to an array, and they can potentially protect multiple
arrays. When one of the drives within the array fails, the system rebuilds the array by using a
spare drive.

For example, in RAID 10 all data is read from the mirrored copy and then written to a spare
drive. The spare becomes a member of the array when the rebuild starts. After the rebuild is
complete and the failed drive is replaced, a member exchange is performed to add the
replacement drive to the array and restore the spare to its original state so it can act as a hot
spare again for another drive failure in the future.

During a rebuild of a traditional RAID array, writes are submitted to a single spare drive, which
can become a bottleneck and might impact I/O performance. With increasing drive capacity,
the rebuild time increases significantly. Additionally, the probability of a second failure during
the rebuild process also becomes more likely. Outside of any rebuild activity, the spare drives
are idle and do not process I/O requests for the system.

DRAID addresses these shortcomings.

Distributed redundant array of independent disks


In DRAID, there are no dedicated spare drives that sit idle most of the time. All drives in the
array (4 - 128 of them) always process I/O requests, which improves the overall I/O performance. Spare
capacity is spread across all member drives to form one or more rebuild areas. During a
rebuild, the write workload is distributed across all drives, removing the single drive bottleneck
of traditional arrays.

Using this approach, DRAID reduces the rebuild time, the impact on I/O performance during
the rebuild, and the probability of a second failure during the rebuild. Like traditional RAID, a
DRAID 6 array can tolerate two drive failures and survive. If another drive fails in the same
array before the array is rebuilt, the MDisk and the storage pool go offline. In other words,
DRAID has the same redundancy characteristics as traditional RAID.

A rebuild after a drive failure reconstructs the data on the failed drive and distributes it across
all drives in the array by using a rebuild area. After the failed drive is replaced, a copyback
process copies the data to the replacement drive and frees the rebuild area so that it can be
used for another drive failure in the future.

The following DRAID types are available:


򐂰 DRAID 5
򐂰 DRAID 6



Note: SCM drives support only DRAID 5 and traditional RAID 10 with optional spares.

Figure 5-37 shows an example of a DRAID 6 with 10 disks. The capacity on the drives is
divided into many packs. The reserved spare capacity (marked in yellow) is equivalent to two
spare drives, but the capacity is distributed across all of the drives (depending on the pack
number) to form two rebuild areas. The data is striped like a traditional RAID array, but the
number of drives in the array can be larger than the stripe width.

Figure 5-37 DRAID 6 (for simplification, not all packs are shown)

Figure 5-38 on page 251 shows what happens after a single drive failure in this DRAID 6.
Drive 3 failed and the array is using half of the spare capacity in each pack (marked in green)
to rebuild the data of the failed drive. All drives are involved in the rebuild process, which
reduces the rebuild time. One of the two distributed rebuild areas is in use, but the second
rebuild area can be used to rebuild the array once more after another failure.

Figure 5-38 Single drive failure with DRAID 6 (for simplification, not all packs are shown)

After the rebuild completes, the array can sustain two more drive failures even before drive 3
is replaced. If no rebuild area is available to perform a rebuild after another drive failure, the
array becomes Degraded until a rebuild area is available again and the rebuild can start.

After drive 3 is replaced, a copyback process copies the data from the occupied rebuild area
to the replacement drive to empty the rebuild area and make sure that it can be used again for
a new rebuild.

DRAID addresses the main disadvantages of traditional RAID while providing the same
redundancy characteristics:
򐂰 In a drive failure, data is read from many drives and written to many drives. This process
minimizes the impact on performance during the rebuild process. Also, it reduces rebuild
time. Depending on the distributed array configuration and drive sizes, the rebuild process
can be up to 10 times faster.
򐂰 Spare space is distributed throughout the array, which means more drives are processing
I/O and no dedicated spare drives are idling.

The DRAID implementation has the following extra advantages:


򐂰 Arrays can be much larger than before and can span many more drives, which improves
the performance of the array. The maximum number of drives a DRAID can contain is 128.
򐂰 Existing distributed arrays can be expanded by adding one or more drives. Traditional
arrays cannot be expanded.
򐂰 Distributed arrays use all the node CPU cores to improve performance, especially in
configurations with few arrays.



Here are the minimum number of drives that are needed to build a DRAID array:
򐂰 Six drives for a DRAID 6 array
򐂰 Four drives for a DRAID 5 array

5.2.3 Creating arrays


Only RAID arrays (array mode MDisks) can be added to a storage pool. It is not possible to
add Just a Bunch of Disks (JBOD) or a single drive. It is also not possible to create a RAID
array without assigning it to a storage pool.

Note: As a best practice, use DRAID 6 whenever possible. DRAID technology dramatically
reduces rebuild times, decreases the exposure of volumes to the extra load of recovering
redundancy, and improves performance.

To create a RAID array from internal storage, right-click the storage pool to which you want to
add storage and select Add Storage, as shown in Figure 5-39.

Figure 5-39 Adding storage to a pool

This action starts the configuration wizard that is shown in Figure 5-40. If any of the drives
have the Unused role, reconfigure them as Candidates so that they can be included in the configuration.

If Internal or Internal Custom is chosen, then the system guides you through array MDisk
creation. If External is selected, the system guides you through the selection of external
storage.

Figure 5-40 Assigning storage to a pool

The Add Storage dialog box provides the following two options to configure internal arrays:
򐂰 Select Quick → Internal: The system automatically chooses the recommended RAID
configuration for drives.
򐂰 Select Advanced → Internal Custom: This option provides more flexibility because you
can choose the RAID parameters manually.

Quick internal configuration


When you select Internal, the system automatically recommends the RAID type (traditional
or distributed) and level (such as RAID 6), number of drives, stripe width, and the number of
rebuild areas (or spares for traditional RAID) for each drive class. The number of drives and
the stripe width can be adjusted before the array is created. Depending on the adjustments
that are made, the system might select a different RAID type and level. A summary view can
be expanded to preview the details of the arrays that are going to be created.

Note: It is not possible to change the RAID level or stripe width of an existing array. You
also cannot change the drive count of a traditional array. If you need to change these
properties, you must delete the array MDisk and re-create it with the required settings.

In Figure 5-41, the dialog box suggests that you create one DRAID 6 array with all ten 10-K
enterprise drives by using a stripe width of 9 and to create a DRAID 5 array with a stripe width
of 4 by using the five NL drives. There is an insufficient number of 15-K enterprise drives to
create an array.

Figure 5-41 Assign Storage to Pool dialog box

If you select two drives only, the system automatically updates the recommendation to a
RAID 1 array. For more control of the array creation steps, you can select the Internal
Custom option.

If sufficient candidate drives are available, the system might recommend traditional arrays by
default. However, switch to DRAID when possible by using the Internal Custom option.



If the system has multiple drive classes (for example, flash and enterprise drives), the default
recommendation is to create multiple arrays of different tiers and assign them all to the pool to
take advantage of the Easy Tier function.

However, you can adjust this configuration by setting the number of drives of different classes
to zero. For more information about Easy Tier, see Chapter 9, “Advanced features for storage
efficiency” on page 463.

If you are adding storage to a pool that already has storage that is assigned to the pool, the
existing storage configuration is considered for the recommendation. The system aims to
achieve a balanced configuration, so some properties are inherited from existing arrays in the
pool for a specific drive class. It is not possible to add RAID arrays that are different from
existing arrays in a pool by using the GUI.

For example, if the pool has an existing DRAID 6 array of 16 drives, you cannot add a
two-drive RAID 1 array to the same pool because this configuration creates an imbalanced
storage pool. You can still add any array of any configuration to an existing pool by using the
CLI.

When you are satisfied with the configuration, click Assign. The RAID arrays are created,
added as array mode MDisks to the pool, and initialized in the background.

If you used self-compressing drives to create the array, the system might prompt you to
modify the overallocation limit of the pool. For more information, see “Modify Maximum
Overallocation Limit window” on page 229.

You can monitor the progress of the initialization by selecting the corresponding task under
Running Tasks in the upper-right corner of the GUI, as shown in Figure 5-42. The array is
available for I/O during this process, so you do not need to wait for it to complete.

Figure 5-42 Array Initialization task

Click View in the Running Tasks list to see the initialization progress and the time remaining,
as shown in Figure 5-43 on page 255. The time that it takes to initialize an array depends on
the type of drives in it. For example, an array of flash drives is much quicker to initialize
than an array of nearline serial-attached SCSI (NL-SAS) drives.

Figure 5-43 Array Initialization task progress information

Advanced Custom configuration


To customize the configuration of MDisks that are composed of internal drives, select Internal
Custom. The following values can be customized:
򐂰 RAID type and level
򐂰 Number of rebuild areas (or spares)
򐂰 Array width
򐂰 Stripe width
򐂰 Number of drives of each class

Figure 5-44 shows an example with 10 drives that are ready to be configured as DRAID 6,
with two rebuild areas that are distributed over all drives as spare capacity.

Figure 5-44 Adding internal storage to a pool by using the Advanced option

To return to the default settings, click Refresh next to the pool capacity. To create and assign
the arrays, click Assign.

Unlike automatic assignment, a custom internal configuration does not create multiple arrays.
You must create each array separately.



As with automatic assignment, the system does not allow you to add different arrays to an
existing populated pool. The goal is to achieve balance across MDisks inside one pool. You
can still add any array of any configuration to an existing pool by using the CLI.

Note: Spare drives are not assigned when traditional RAID arrays are created by using the
Internal Custom configuration wizard. You must set them up manually.

Configuring arrays with the CLI


When you work with the CLI, run the mkarray command to create traditional RAID arrays, and
run the mkdistributedarray command to create DRAID arrays. First, retrieve a list of drives
that are ready to become array members. To find information about how to list all available
drives, read their use roles, and change those use roles, see 5.2.1, “Working with drives” on
page 240.

To get the recommended array configuration by using the CLI, run the lsdriveclass
command to list the available drive classes, and then run the lsarrayrecommendation
command, as shown in Example 5-16. The recommendations are listed in the order of
preference.

Example 5-16 Listing array recommendations by using the CLI


IBM_Storwize:ITSOV7K:superuser>lsdriveclass
id RPM capacity tech_type block_size candidate_count
0 10000 1.1TB tier_enterprise 512 10
1 7200 931.0GB tier_nearline 512 5
2 15000 136.2GB tier_enterprise 512 1
IBM_Storwize:ITSOV7K:superuser>lsarrayrecommendation -driveclass 0 -drivecount 10
Pool2
raid_level distributed stripe_width rebuild_areas drive_count array_count capacity
RAID 6 yes 9 1 10 1 7.6TB
RAID 6 no 10 0 10 1 8.7TB
RAID 5 yes 9 1 10 1 8.7TB
RAID 5 no 9 0 9 1 8.7TB
RAID 10 no 8 0 8 1 4.4TB
RAID 1 no 2 0 2 5 5.5TB

To create the recommended DRAID 6 array, specify the RAID level, drive class, number of
drives, stripe width, number of rebuild areas, and the storage pool. The system automatically
chooses drives for the array from the available drives in the class. In Example 5-17, you
create a DRAID 6 array out of 10 drives of class 0 by using a stripe width of 9 and a single
rebuild area, and you add it to Pool2.

Example 5-17 Creating a DRAID by running the mkdistributedarray command


IBM_Storwize:ITSOV7K:superuser>mkdistributedarray -level raid6 -driveclass 0
-drivecount 10 -stripewidth 9 -rebuildareas 1 Pool2
MDisk, id [0], successfully created

There are default values for the stripe width and the number of rebuild areas, which depend
on the RAID level and the drive count. In this example, you had to specify the stripe width
because for DRAID 6 it is 12 by default. The drive count value must equal or be greater than
the sum of the stripe width and the number of rebuild areas.

To create a RAID 10 MDisk instead, you must specify a list of drives that you want to add as
members, the RAID level, and the storage pool name or ID to which you want to add this
array.

Example 5-18 creates a RAID 10 array from a colon-separated list of drive IDs (the list that is shown is illustrative) and adds it to Pool2. It also designates a spare drive.

Example 5-18 Creating a RAID by running the mkarray command


IBM_Storwize:ITSOV7K:superuser>mkarray -level raid10 -drive 0:1:2:3 Pool2
MDisk, id [0], successfully created
IBM_Storwize:ITSOV7K:superuser>chdrive -use spare 8

Note: Do not forget to designate some of the drives as spares when creating traditional
arrays. Spare drives are required to perform a rebuild immediately after a drive failure.

The storage pool must exist. To create a storage pool, see 5.1.1, “Creating storage pools” on
page 224. To check the array initialization progress by using the CLI, run the
lsarrayinitprogress command.
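For example, the following sketch (the values are illustrative) shows the progress percentage
and the estimated completion time for an array that is still initializing:

IBM_Storwize:ITSOV7K:superuser>lsarrayinitprogress
mdisk_id mdisk_name progress estimated_completion_time
16 Distributed_array 47 191019120000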

5.2.4 Actions on arrays


MDisks that are created from internal storage support specific actions that external MDisks
do not support. Some actions that are supported on traditional RAID arrays are not supported
on DRAID arrays, and vice versa.

To select an action, select Pools → MDisks by Pools, select the array (MDisk), and click
Actions. Alternatively, right-click the array, as shown in Figure 5-45.

Figure 5-45 Actions on arrays

Rename
To change the name of an MDisk, select this option.

The CLI command for this operation is charray, as shown in Example 5-19. No feedback is
returned.

Example 5-19 Renaming an array MDisk by running the charray command


IBM_Storwize:ITSOV7K:superuser>charray -name Distributed_array mdisk1
IBM_Storwize:ITSOV7K:superuser>



Swap Drive
To replace a drive in the array with another drive, select Swap Drive. The other drive must
have the Candidate or Spare role. Use this action to perform proactive drive replacement or
replace a drive that has not failed but is expected to fail soon, for example, as indicated by an
error message in the event log.

Figure 5-46 shows the dialog box that opens. Select the member drive that you want to replace
and the replacement drive, and click Swap.

Figure 5-46 Swapping an array member with a candidate or spare drive

The exchange of the drives runs in the background. The volumes on the affected MDisk
remain accessible during the process.

Swapping a drive in a traditional array performs a concurrent member exchange, which does
not reduce the redundancy of the array. The data of the old member is copied to the new
member, and after the process is complete, the old member is removed from the array.

In a distributed array, the system immediately removes the old member from the array and
performs a rebuild. After the rebuild completes, a copyback is initiated to copy the data to the
new member drive. This process is non-disruptive, but reduces the redundancy of the array
during the rebuild process.

You can run the charraymember command to do this task. Example 5-20 shows the
replacement of array member ID 7 that was assigned to drive ID 12 with drive ID 17. The
-immediate parameter is required for distributed arrays to acknowledge that a rebuild will
start.

Example 5-20 Replacing an array member by using the CLI (some columns are not shown)
IBM_Storwize:ITSOV7K:superuser>lsarraymember 16
mdisk_id mdisk_name member_id drive_id new_drive_id spare_protection
16 Distributed_array 6 18 1
16 Distributed_array 7 12 1
16 Distributed_array 8 15 1
<...>
IBM_Storwize:ITSOV7K:superuser>lsdrive
id status error_sequence_number use tech_type capacity mdisk_id
16 online member tier_enterprise 558.4GB 16
17 online spare tier_enterprise 558.4GB
18 online member tier_enterprise 558.4GB 16
<...>
IBM_Storwize:ITSOV7K:superuser>charraymember -immediate -member 7 -newdrive 17
Distributed_array
IBM_Storwize:ITSOV7K:superuser>

Set Spare Goal or Set Rebuild Areas Goal
Select this option to set the number of spare drives (on a RAID) or rebuild areas (on a DRAID)
that are expected to protect the array from drive failures.

If the number of rebuild areas that are available does not meet the configured goal, an error is
logged in the event log, as shown in Figure 5-47. This error can be fixed by replacing failed
drives in the DRAID array.

Figure 5-47 Error 1690 for insufficient rebuild areas available

Note: This option does not change the actual number of rebuild areas or spares that are
available to the array, but only specifies at which point a warning event is generated.
Setting the goal to 0 does not prevent the array from rebuilding.

On the CLI, this task is performed with the charray command (see Example 5-21).

Example 5-21 Adjusting array goals by running the charray command (some columns are not shown)
IBM_Storwize:ITSOV7K:superuser>lsarray
mdisk_id mdisk_name status mdisk_grp_id mdisk_grp_name distributed
0 mdisk0 online 0 mdiskgrp0 no
16 Distributed_array online 1 mdiskgrp1 yes
IBM_Storwize:ITSOV7K:superuser>charray -sparegoal 2 mdisk0
IBM_Storwize:ITSOV7K:superuser>charray -rebuildareasgoal 2 Distributed_array

Expand
Select Expand to add more drives to the array, either to increase its available capacity or to
create more rebuild areas. Only distributed arrays can be expanded; this option is not
available for traditional arrays.



Candidate drives of a drive class that is compatible with the drive class of the array must be
available in the system; otherwise, an error message is shown and the array cannot be expanded. A
drive class is compatible with another one if its characteristics, such as capacity and
performance, are an exact match or are superior. In most cases, drives of the same class
should be used to expand an array.

The dialog box that is shown in Figure 5-48 shows an overview of the size of the array, the
number of available candidate drives in the selected drive class, and the new array capacity
after the expansion. The drive class and the number of drives to add can be modified as
required and the projected new array capacity is updated. To add more rebuild areas to the
array, click Advanced Settings and modify the number of extra spares.

Figure 5-48 Expanding a distributed array

Clicking Expand starts a background process that adds the selected number of drives to the
array. As part of the expansion, the system automatically migrates data for optimal
performance for the new expanded configuration.

You can monitor the progress of the expansion by clicking the Running Tasks icon in the
upper-right corner of the GUI or by selecting Monitoring → Background tasks as shown in
Figure 5-49.

Figure 5-49 Array expansion progress

Note: When you expand a thin-provisioned Non-Volatile Memory Express (NVMe) array,
the physical capacity is not immediately available, and the availability of new physical
capacity is not tracked with logical expansion progress.

On the CLI, this task is performed by running the expandarray command. To get a list of
compatible drive classes, run the lscompatibledriveclasses command, as shown in
Example 5-22.

Example 5-22 Expanding an array by using the CLI


IBM_Storwize:ITSOV7K:superuser>lsarray 0
<..>
capacity 3.2TB
<..>
drive_class_id 0
drive_count 6
<..>
rebuild_areas_total 1
IBM_Storwize:ITSOV7K:superuser>lscompatibledriveclasses 0
id
0
IBM_Storwize:ITSOV7K:superuser>expandarray -driveclass 0 -totaldrivecount 10
-totalrebuildareas 2 0
IBM_Storwize:ITSOV7K:superuser>lsarrayexpansionprogress
progress estimated_completion_time target_capacity additional_capacity_remaining
29 191018233758 5.17TB 1.38TB

Note: The expandarray command uses the total drive count after the expansion as a
parameter, including both the number of new drives and the number of drives in the array
before the expansion. The same is true for the number of rebuild areas.

Delete
Select Delete to remove the array from the storage pool and delete it. An array MDisk does
not exist outside of a storage pool. Therefore, an array cannot be removed from the pool
without being deleted. All drives that belong to the deleted array take on the Candidate role.

If there are no volumes that use extents from this array, the command runs immediately
without additional confirmation. If there are volumes that use extents from this array, you are
prompted to confirm the action, as shown in Figure 5-50.

Figure 5-50 Deleting an array from a non-empty storage pool

Confirming the deletion starts a background process that migrates used extents on the MDisk
to other MDisks in the same storage pool. After that process completes, the array is removed
from the storage pool and deleted.



Note: The command fails if you do not have enough available capacity remaining in the
storage pool to allocate the capacity that is being migrated away from the removed array.

To delete the array with the CLI, run the rmarray command. The -force parameter is required
if volume extents must be migrated to other MDisks in a storage pool.

To monitor the progress of the migration, use the Running Tasks section in the GUI or the
lsmigrate command on the CLI. The MDisk continues to exist until the migration completes.
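The following sketch shows the sequence (the array and pool names are illustrative, and the
lsmigrate output is abridged):

IBM_Storwize:ITSOV7K:superuser>rmarray -mdisk Distributed_array -force mdiskgrp1
IBM_Storwize:ITSOV7K:superuser>lsmigrate
migrate_type MDisk_Group_Migration
progress 42
migrate_source_vdisk_index 0
migrate_target_mdisk_grp 1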

Dependent Volumes
A volume depends on an MDisk if the MDisk becoming unavailable results in a loss of access
or a loss of data for that volume. Use this option before you do maintenance operations to
confirm which volumes (if any) will be affected.

If an MDisk in a storage pool goes offline, the entire storage pool goes offline, which means
all volumes in a storage pool depend on each MDisk in the same pool, even if the MDisk does
not have extents for each of the volumes. Clicking the Dependent Volumes Action menu of
an MDisk lists the volumes that depend on that MDisk, as shown in Figure 5-51.

Figure 5-51 Dependent volumes for MDisk mdisk0

You can get the same information by running the lsdependentvdisks command, as shown in
Example 5-23.

Example 5-23 Listing virtual disks that depend on an MDisk by using the CLI
IBM_Storwize:ITSOV7K:superuser>lsdependentvdisks -mdisk mdisk0
vdisk_id vdisk_name
0 ITSO-SRC01
1 ITSO-TGT01

2 ITSO-ApplDB01
<...>

Drives
To see information about the member drives that are included in the array, select Drives, as
shown in Figure 5-52.

Figure 5-52 List of drives in an array

You can get the same information by running the lsarraymember command. Provide an array
name or ID as the parameter to filter the output from the array. If you run the command
without arguments, the command lists all members of all configured arrays.

Properties
Select Properties to see all the available array MDisk parameters: its state, capacity, RAID
level, and others.

To get a list of all configured arrays, run the lsarray command. Provide an array name or ID
as the parameter to get more information about a specific array, as shown in Example 5-24.

Example 5-24 The lsarray command output (truncated)


IBM_Storwize:ITSOV7K:superuser>lsarray
mdisk_id mdisk_name status mdisk_grp_id mdisk_grp_name capacity
0 mdisk0 online 0 mdiskgrp0 1.3TB
16 Distributed_array online 1 mdiskgrp1 2.2TB
IBM_Storwize:ITSOV7K:superuser>lsarray 16
mdisk_id 16
mdisk_name Distributed_array
status online
mode array
mdisk_grp_id 1
mdisk_grp_name mdiskgrp1



capacity 2.2TB
<...>

5.3 Working with external controllers and MDisks


Controllers are external storage systems that provide storage resources that are used as
MDisks. The system supports external storage controllers that are attached through internet
Small Computer Systems Interface (iSCSI) and through Fibre Channel (FC).

A key feature of the system is its ability to consolidate disk controllers from various vendors
into storage pools. The storage administrator can manage and provision storage to
applications from a single user interface and use a common set of advanced functions across
all of the storage systems under the control of the system.

This concept is called External Virtualization, which makes your storage environment more
flexible, cost-effective, and easy to manage. External Virtualization is a licensed function.

For more information about how to configure external storage systems, see 2.8, “Back-end
storage configuration” on page 90.

5.3.1 External storage controllers


External storage controllers can be attached by using FC and iSCSI. The following sections
describe how to attach external storage controllers to the system and how to manage them by
using the GUI.

System layers
A system layer affects how the system interacts with other external IBM Storwize or
IBM FlashSystem family systems. A system is in either the storage layer (default) or the
replication layer.

In the storage layer, the system can provide external storage for a replication-layer system,
but it cannot use another Storwize or IBM FlashSystem family system that is in the storage
layer as its external storage.

In the replication layer, the system cannot provide external storage for a replication-layer
system, but it can use another storage-layer Storwize or IBM FlashSystem family system as
its external storage.

You get a warning that your system is in the storage layer if you try to add an external iSCSI
storage controller by using the GUI. You are prompted to convert the system to the replication
layer automatically.

Note: Before you change the system layer, the following conditions must be met:
򐂰 No host object can be configured with worldwide port names (WWPNs) from a Storwize
or IBM FlashSystem family system.
򐂰 No system partnerships can be defined.
򐂰 No Storwize or IBM FlashSystem family system can be visible on the storage area
network (SAN) fabric.

To switch the system layer, you can also run the chsystem CLI command, as shown in
Example 5-25. If the command runs successfully, it returns no output.

Example 5-25 Changing the system layer


IBM_Storwize:ITSOV7K:superuser>lssystem | grep layer
layer storage
IBM_Storwize:ITSOV7K:superuser>chsystem -layer replication
IBM_Storwize:ITSOV7K:superuser>

For more information about layers and how to change them, see Product overview →
Technical overview → System layers in IBM Knowledge Center.

Attachment by using Fibre Channel


A controller that is connected through FC is detected automatically by the system if the
cabling, zoning, and system layer are configured correctly. For more information about how to
attach and zone back-end storage controllers to the system, see 2.6, “Fibre Channel SAN
configuration planning” on page 81.

If the external controller is not detected, ensure that the system is cabled and zoned into the
same SAN as the external storage system. Check that layers are set correctly on both
virtualizing and virtualized systems if they belong to the IBM Storwize or IBM FlashSystem
family.

After the problem is corrected, rescan the FC network immediately by selecting Pools →
External Storage, and then selecting Actions → Discover Storage, as shown in
Figure 5-53.

Figure 5-53 Discover Storage menu

This action runs the detectmdisk command, which returns no output. Although it might
appear that the command completed, some extra time might be required for it to run because
the command is asynchronous and returns the prompt while it continues to run in the
background.
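To verify from the CLI whether a discovery is still running, you can check the discovery status
after you start the rescan. This is a sketch, and the output is illustrative; a status of inactive
indicates that the discovery completed:

IBM_Storwize:ITSOV7K:superuser>detectmdisk
IBM_Storwize:ITSOV7K:superuser>lsdiscoverystatus
id scope IO_group_id IO_group_name status
0 fc_fabric inactive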

Attachment by using iSCSI


Unlike a Fibre Channel connection, you must manually configure iSCSI connections
between the system and the external storage controller. Until you do this task, the controller is
not listed in the External Storage pane. For more information about how to attach back-end
storage controllers to the system, see Chapter 2, “Planning” on page 77.

To start virtualizing an iSCSI back-end controller, you must follow the documentation in IBM
Knowledge Center to perform configuration steps that are specific to your back-end storage
controller. You can find the steps by selecting Configuring → Configuring and
servicing storage systems → External storage system configuration with iSCSI
connections.

For more information about configuring the system to virtualize a back-end storage controller
with iSCSI, see iSCSI Implementation and Best Practices on IBM Storwize Storage Systems,
SG24-8327.
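As a rough CLI outline, establishing sessions to an iSCSI back-end controller typically
involves discovering candidate target ports and then adding the chosen candidate. The
source port ID, target IP address, and output in this sketch are hypothetical; follow the
referenced documentation for the exact procedure for your controller:

IBM_Storwize:ITSOV7K:superuser>detectiscsistorageportcandidate -srcportid 3 -targetip 192.168.104.10
IBM_Storwize:ITSOV7K:superuser>lsiscsistorageportcandidate
id src_port_id target_ipv4 status
0 3 192.168.104.10 full
IBM_Storwize:ITSOV7K:superuser>addiscsistorageport 0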



Managing external storage controllers
You can manage both FC and iSCSI storage controllers through the External Storage pane.
To access the External Storage pane, select Pools → External Storage, as shown in
Figure 5-54.

Figure 5-54 External Storage pane

Note: A controller that is connected through FC is detected automatically by the system if
the cabling, the zoning, and the system layer are configured correctly. A controller that is
connected through iSCSI must be added to the system manually.

Depending on the type of back-end system, it might be detected as one or more controller
objects.

If the External Storage pane does not appear in the Pools windows, the virtualization licenses
are not configured. To use the system’s virtualization functions, you must order the correct
External Virtualization licenses. You can configure the licenses by selecting Settings →
System → Licensed Functions. For assistance with licensing questions or to purchase any
of these licenses, contact your IBM account team or IBM Business Partner.

The External Storage pane lists the external controllers that are connected to the system and
all the external MDisks that are detected by the system. The MDisks are organized by the
external storage system that presents them. Toggle the sign to the left of the controller icon to
show or hide the MDisks that are associated with the controller.

If you configured logical unit names on your external storage systems, it is not possible for the
system to determine these names because they are local to the external storage system.
However, you can use the LU unique identifiers (UIDs), or external storage system worldwide
node names (WWNNs) and LU number to identify each device.

To list all visible external storage controllers with CLI, run the lscontroller command, as
shown in Example 5-26.

Example 5-26 Listing controllers by using the CLI (some columns are not shown)
IBM_Storwize:ITSOV7K:superuser>lscontroller
id controller_name ctrl_s/n vendor_id product_id_low
0 controller0 2076 IBM 2145
1 controller1 2076 IBM 2145
2 controller2 2076 IBM 2145
<...>

5.3.2 Actions for external storage controllers
You can perform many actions on external storage controllers. Some actions are available for
external iSCSI controllers only.

To select any action, select Pools → External Storage and right-click the controller, as
shown in Figure 5-55. Alternatively, select the controller and click Actions.

Figure 5-55 Actions for external storage controllers

Discover Storage
When you create or remove LUs on an external storage system, the change might not be
detected immediately. In this case, click Discover Storage so that the system can rescan the
FC or iSCSI network. In general, the system automatically detects disks when they appear on
the network. However, some FC controllers do not send the required SCSI primitives that are
necessary to automatically discover the new disks.

The rescan process discovers any new MDisks that were added to the system and
rebalances MDisk access across the available ports. It also detects any loss of availability of
the controller ports.

This action runs the detectmdisk command.

Rename
To modify the name of an external controller to simplify administration tasks, click Rename.
The naming rules are the same as for storage pools, and they can be found in 5.1.1, “Creating
storage pools” on page 224.

To rename a storage controller by using the CLI, run the chcontroller command.
Example 5-27 on page 268 shows a use case.

Removing iSCSI sessions


This action is available only for external controllers that are attached with iSCSI. To remove
the iSCSI session that is established between the source and target port, right-click the
session and select Remove.

For more information about the CLI commands and detailed instructions, see iSCSI
Implementation and Best Practices on IBM Storwize Storage Systems, SG24-8327.



Modifying a site
This action is available only for systems that use IBM HyperSwap or a stretched topology. To
change the site with which the external controller is associated, select Modify Site, as shown
in Figure 5-56.

Figure 5-56 Modifying the site of an external controller

To change the controller site assignment by using the CLI, run the chcontroller command.
Example 5-27 shows that controller0 was renamed and reassigned to a different site.

Example 5-27 Changing a controller’s name and site


IBM_Storwize:ITSOV7K:superuser>chcontroller -name site3_controller -site site3 controller0
IBM_Storwize:ITSOV7K:superuser>

5.3.3 Working with external MDisks


After an external back-end storage controller is configured, attached to the system, and
detected as a controller, you can work with LUs that are provisioned from it. Each LU is
represented by an MDisk object.

External MDisks can have one of the following modes:


򐂰 Unmanaged
External MDisks are initially discovered by the system as unmanaged MDisks. An
unmanaged MDisk is not a member of any storage pool. It is not associated with any
volumes, and has no metadata that is stored on it. The system does not write to an MDisk
that is in unmanaged mode except when it attempts to change the mode of the MDisk to
one of the other modes. Removing an external MDisk from a pool returns it to unmanaged
mode.
򐂰 Managed
When unmanaged MDisks are added to storage pools, they become managed. Managed
mode MDisks are always members of a storage pool, and their extents contribute to the
storage pool. This mode is the most common and normal mode for an MDisk.
򐂰 Image
Image mode provides a direct block-for-block conversion from the MDisk to a volume. This
mode is provided to satisfy the following major usage scenarios:
– Presenting existing data on an MDisk through the system to an attached host
– Importing existing data on an MDisk into the system
– Exporting data on a volume by performing a migration to an image mode MDisk

Listing external MDisks
You can manage external MDisks by using the External Storage pane, which is accessed by
selecting Pools → External Storage, as shown in Figure 5-57.

Figure 5-57 External Storage pane

To list all MDisks that are visible by the system by using the CLI, run the lsmdisk command
without any parameters. If required, you can filter output to include only external or only array
type MDisks.
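For example, to list only the MDisks that are in unmanaged mode, you can use the standard
-filtervalue parameter (the output is illustrative):

IBM_Storwize:ITSOV7K:superuser>lsmdisk -filtervalue mode=unmanaged
id name status mode mdisk_grp_id mdisk_grp_name capacity controller_name
3 mdisk3 online unmanaged 100.0GB controller0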

Assigning MDisks to pools


You can add Unmanaged MDisks to an existing pool or create a pool to include them. If no
storage pool exists yet, follow the procedure that is outlined in 5.1.1, “Creating storage pools”
on page 224.

Figure 5-58 shows how to add a selected MDisk to an existing storage pool. Click Assign
under the Actions menu or right-click the MDisk and select Assign.

Figure 5-58 Assigning an unmanaged MDisk

After you click Assign, a dialog box opens, as shown in Figure 5-59. Select the target pool,
MDisk storage tier, and external encryption setting.

Figure 5-59 Assign MDisk dialog box



When you add MDisks to pools, you must assign them to the correct storage tiers. It is
important to set the tiers correctly if you plan to use the Easy Tier feature. Using an incorrect
tier can mean that the Easy Tier algorithm might make wrong decisions and thus affect
system performance. For more information about storage tiers, see Chapter 9, “Advanced
features for storage efficiency” on page 463.

The storage tier setting can also be changed after the MDisk is assigned to the pool.

Select the Externally encrypted check box if your back-end storage performs data
encryption. For more information about encryption, see Chapter 12, “Encryption” on
page 701.

After the task completes, click Close.

Note: If the external storage LUs that are presented to the system contain data that must
be retained, do not use the Assign option to add the MDisks to a pool. This option
destroys the data on the LU. Instead, use the Import option to create an image mode
MDisk. For more information, see Chapter 8, “Storage migration” on page 443.

To see the external MDisks that are assigned to a pool within the system, select Pools →
MDisks by Pools.

When a new MDisk is added to a pool that already contains MDisks and volumes, the Easy
Tier feature automatically balances volume extents between the MDisks in the pool as a
background process. The goal of this process is to distribute extents in a way that provides
the best performance to the volumes. It does not attempt to balance the amount of data
evenly between all MDisks.

The data migration decisions that Easy Tier makes between tiers of storage (inter-tier) or
within a single tier (intra-tier) are based on the I/O activity that is measured. Therefore, when
you add an MDisk to a pool, extent migrations are not necessarily performed immediately. No
migration of extents occurs until there is sufficient I/O activity to trigger it.

If Easy Tier is turned off, no extent migration is performed. Only newly allocated extents are
written to a new MDisk.

For more information about the Easy Tier feature, see Chapter 9, “Advanced features for
storage efficiency” on page 463.

To assign an external MDisk to a storage pool by using the CLI, run the addmdisk command.
You must specify the MDisk name or ID, MDisk tier, and target storage pool, as shown in
Example 5-28. The command returns no feedback.

Example 5-28 The addmdisk command


IBM_Storwize:ITSOV7K:superuser>addmdisk -mdisk mdisk3 -tier enterprise Pool0
IBM_Storwize:ITSOV7K:superuser>

5.3.4 Actions for external MDisks


External MDisks support specific actions that are not supported on RAID arrays that are
made from internal storage. Some actions are supported only on unmanaged external
MDisks, and some are supported only on managed external MDisks.

To choose an action, select Pools → External Storage or Pools → MDisks by Pools, select
the external MDisk, and click Actions, as shown in Figure 5-60. Alternatively, right-click the
external MDisk.

Figure 5-60 Actions for MDisks

Discover Storage
This option is available even if no MDisks are selected. By running it, you cause the system to
rescan the iSCSI and FC network for these purposes:
򐂰 Find any new MDisks that might have been added.
򐂰 Rebalance MDisk access across all available controller device ports.

This action runs the detectmdisk command.

Assign
This action is available only for unmanaged MDisks. Select Assign to open the dialog box
that is explained in “Assigning MDisks to pools” on page 269.

Modify Tier
To modify the tier to which the external MDisk is assigned, select Modify Tier, as shown in
Figure 5-61. This setting is adjustable because the system cannot always detect the tiers that
are associated with external storage automatically, unlike with internal arrays.

Figure 5-61 Modifying an external MDisk tier

For more information about storage tiers and their importance, see Chapter 9, “Advanced
features for storage efficiency” on page 463.



To change the external MDisk storage tier, run the chmdisk command. Example 5-29 shows
setting the new tier to mdisk2. No feedback is returned.

Example 5-29 Changing the tier setting by using the CLI


IBM_Storwize:ITSOV7K:superuser>chmdisk -tier tier1_flash mdisk2
IBM_Storwize:ITSOV7K:superuser>

Modify Encryption
To modify the encryption setting for the MDisk, select Modify Encryption. This option is
available only when encryption is enabled.

If the external MDisk is already encrypted by the external storage system, change the
encryption state of the MDisk to Externally encrypted. This setting stops the system from
encrypting the MDisk again if the MDisk is part of an encrypted storage pool.

For more information about encryption, encrypted storage pools, and self-encrypting MDisks,
see Chapter 12, “Encryption” on page 701.

To perform this task by using the CLI, run the chmdisk command, as shown in Example 5-30.

Example 5-30 Using chmdisk to modify the encryption


IBM_Storwize:ITSOV7K:superuser>chmdisk -encrypt yes mdisk5
IBM_Storwize:ITSOV7K:superuser>

Import
This action is available only for unmanaged MDisks. Importing an unmanaged MDisk enables
you to preserve the existing data on the MDisk. You can migrate the data to a new volume or
keep the data on the external system.

MDisks are imported for storage migration. The system provides a migration wizard to help
with this process, which is described in Chapter 8, “Storage migration” on page 443.

Note: The wizard is the preferred method to migrate data from legacy storage to the
system. When an MDisk is imported, the data on the original LU is not modified. The
system acts as a pass-through, and the extents of the imported MDisk do not contribute to
storage pools.

Select Import and choose one of the following migration methods:


򐂰 Import to temporary pool as image mode volume
This method does not migrate data from the source MDisk. It creates an image mode
volume that has a direct block-for-block conversion of the MDisk. The existing data is
preserved on the external storage controller, but it is also accessible from the system.
In this method, the image mode volume is created in a temporary migration pool and is
presented through the system. Choose the extent size of the temporary pool and click
Import, as shown in Figure 5-62 on page 273.

Figure 5-62 Importing an unmanaged MDisk

The MDisk is imported and listed as an image mode MDisk in the temporary migration
pool, as shown in Figure 5-63.

Figure 5-63 Image mode imported MDisk

A corresponding image mode volume is now available in the same migration pool, as
shown in Figure 5-64.

Figure 5-64 Image mode volume

The image mode volume can then be mapped to the original host. The data is still
physically present on the MDisk of the original external storage controller and no
automatic migration process is running. The original host sees no difference and its
applications can continue to run. The image mode volume is now under the control of the
system and it can optionally be migrated to another storage pool or be converted from
image mode to a striped virtualized volume. You can use the Volume Migration wizard or
perform the tasks manually.



򐂰 Migrate to an existing pool
This method starts by creating an image mode volume like the first method. However, it
then automatically migrates the image mode volume to a virtualized volume in the
selected storage pool. Free extents must be available in the selected target pool so that
the data can be copied there.
If this method is selected, choose the storage pool to hold the new volume and click
Import, as shown in Figure 5-65.

Figure 5-65 Migrating an MDisk to an existing pool

The data migration begins automatically after the MDisk is imported successfully as an
image mode volume. You can check the migration progress by clicking the task under
Running Tasks, as shown in Figure 5-66.

Figure 5-66 MDisk migration in the Running Tasks pane

After the migration completes, the volume is available in the chosen destination pool. This
volume is no longer an image mode volume. It is now virtualized by the system.
All data is migrated off the source MDisk, and the MDisk has switched its mode, as shown
in Figure 5-67.

Figure 5-67 Imported MDisks appear as Managed

The MDisk can be removed from the migration pool. It returns to the list of external MDisks
as Unmanaged. The MDisk can now be used as a regular managed MDisk in a storage pool,
or it can be decommissioned.

Alternatively, importing and migrating external MDisks to another pool can be done by
selecting Pools → System Migration to start the system migration wizard. For more
information, see Chapter 8, “Storage migration” on page 443.
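On the CLI, the equivalent of the first import option is to create an image mode volume from
the unmanaged MDisk by running the mkvdisk command with -vtype image. The following
sketch assumes an unmanaged MDisk named mdisk5 and an existing migration pool named
MigrationPool_8192 (both names are hypothetical):

IBM_Storwize:ITSOV7K:superuser>mkvdisk -mdiskgrp MigrationPool_8192 -iogrp 0 -vtype image -mdisk mdisk5 -name imported_vol
Virtual Disk, id [10], successfully created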

Include
The system can exclude an MDisk from its storage pool if it has multiple I/O failures or has
persistent connection errors. Exclusion ensures that there is no excessive error recovery that
might impact other parts of the system. If an MDisk is automatically excluded, run the
directed maintenance procedure (DMP) to resolve any connection and I/O failure errors.

If no error event is associated with the MDisk in the log and the external problem is corrected,
click Include to add the excluded MDisk back to the storage pool.

The includemdisk command performs the same task. The command needs the MDisk name
or ID to be provided as a parameter, as shown in Example 5-31.

Example 5-31 Including a degraded MDisk by using the CLI


IBM_Storwize:ITSOV7K:superuser>includemdisk mdisk3
IBM_Storwize:ITSOV7K:superuser>

Remove
In some cases, you might want to remove external MDisks from their storage pool. To remove
the MDisk from the storage pool, click Remove. After the MDisk is removed, it goes back to
the Unmanaged state. If there are no volumes in the storage pool to which this MDisk is
allocated, the command runs immediately without additional confirmation. If there are
volumes in the pool, you are prompted to confirm the action, as shown in Figure 5-68.

Figure 5-68 Removing an MDisk from a pool

Confirming the action starts a migration of the volume extents on that MDisk to other MDisks
in the pool. During this background process, the MDisk remains a part of the storage pool.
Only when the migration completes is the MDisk removed from the storage pool and returns
to Unmanaged mode.

Ensure that you have enough available capacity remaining in the storage pool to allocate the
data being migrated from the removed MDisk, or this command fails.

Important: The MDisk that you are removing must remain accessible to the system while
all data is copied to other MDisks in the same storage pool. If the MDisk is unmapped
before the migration finishes, all volumes in the storage pool go offline and remain in this
state until the removed MDisk is connected again.



To remove an MDisk from a storage pool by using the CLI, run the rmmdisk command. You
must use the -force parameter if you must migrate volume extents to other MDisks in a
storage pool.

The command fails if you do not have enough available capacity remaining in the storage pool
to allocate the data that you are migrating from the removed array.
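For example, the following sketch (the MDisk and pool names are illustrative) removes an
MDisk and forces the migration of its extents:

IBM_Storwize:ITSOV7K:superuser>rmmdisk -mdisk mdisk3 -force Pool0
IBM_Storwize:ITSOV7K:superuser>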

Dependent Volumes
A volume depends on an MDisk if the MDisk becoming unavailable results in a loss of access
or a loss of data for that volume. Use this option before you do maintenance operations to
confirm which volumes (if any) are affected. Selecting an MDisk and clicking Dependent
Volumes lists the volumes that depend on that MDisk.

You can get the same information by running the lsdependentvdisks command.

View Provisioning Groups


Provisioning groups are used for capacity reporting and monitoring of overprovisioned
external storage controllers. Each overprovisioned MDisk is part of a provisioning group that
defines the physical storage resources that are available to a set of MDisks. Storage
controllers report the usable capacity of an overprovisioned MDisk based on its provisioning
group. If multiple MDisks are part of the same provisioning group, then these MDisks share
the physical storage resources and report the same usable capacity. However, this usable
capacity is not available to each MDisk individually because it is shared among all these
MDisks.

To know the usable capacity that is available to the system or to a pool when overprovisioned
storage is used, you must account for the usable capacity of each provisioning group. To
show a summary of overprovisioned external storage, including controllers, MDisks, and
provisioning groups, click View Provisioning Groups, as shown in Figure 5-69.

For more information, see 9.6, “Overprovisioning and data reduction on external storage” on
page 500.

Figure 5-69 View Provisioning Groups


Chapter 6. Volumes
In IBM Spectrum Virtualize, a volume is storage space that is provisioned out of a storage
pool and presented to a host as a Small Computer System Interface (SCSI) logical unit (LU);
that is, a logical disk.

This chapter describes how to create and provision volumes on IBM Spectrum Virtualize
systems. The first part of this chapter provides a brief overview of IBM Spectrum Virtualize
volumes, the classes of volumes that are available, and the available volume customization
options.

The second part of this chapter describes how to create, modify, and map volumes by using
the GUI.

The third part of this chapter provides an introduction to volume manipulation from the
command-line interface (CLI).

This chapter includes the following topics:


򐂰 6.1, “Introduction to volumes” on page 278
򐂰 6.2, “Volume characteristics” on page 278
򐂰 6.3, “Virtual volumes” on page 295
򐂰 6.4, “Volumes in multi-site topologies” on page 296
򐂰 6.5, “Operations on volumes” on page 298
򐂰 6.6, “Volume operations by using the CLI” on page 341



6.1 Introduction to volumes
A volume is a logical disk that the system presents to attached hosts. For the attached host, a
volume is a SCSI target that uses a 512-byte block size.

For an IBM Spectrum Virtualize system cluster, the volume that is presented to a host is
internally represented as a virtual disk (VDisk). A VDisk is an area of usable storage that was
allocated out of a pool of storage that is managed by an IBM Spectrum Virtualize cluster. The
term virtual is used because the volume that is presented does not necessarily exist on a
single physical entity.

Note: Volumes are composed of extents that are allocated from a storage pool. Storage
pools group managed disks (MDisks), which are Redundant Arrays of Independent Disks
(RAIDs) from internal storage, or LUs that are presented to and virtualized by an
IBM Spectrum Virtualize system. Each MDisk is divided into sequentially numbered
extents (zero-based indexing). The extent size is a property of a storage pool, and is used
for all MDisks that make up the storage pool.

MDisks are internal objects that are used for storage management. They are not directly
visible to or used by host systems.

A volume is presented to hosts by an I/O group, and within that group it has a preferred node,
that is, a node that by default serves I/O requests to that volume. When a host requests an I/O
operation to a volume, the multipath driver on the host identifies the preferred node for the
volume and by default uses only paths to this node for I/O requests.

6.2 Volume characteristics


There are several parameters that characterize each volume. They should be set correctly to
match the requirements of the storage user (an application running on a host). These
characteristics are:
򐂰 Size
򐂰 Performance (input/output operations per second (IOPS), response time, and bandwidth)
򐂰 Resiliency
򐂰 Storage efficiency
򐂰 Security (data-at-rest encryption)
򐂰 Extent allocation policy
򐂰 Management mode

Additionally, volumes can be configured as VMware vSphere Virtual Volumes (VVOLs).

VVOLs change the approach to VMware virtual machines (VMs) disk configuration. Instead of
files on a VMware Virtual Machine File System (VMFS) distributed file system that is created
on a single large volume, VVOLs introduce one-to-one mapping between VM disks and
storage volumes. VVOLs can be managed by the VMware infrastructure so that the storage
system administrator can delegate VM disk management to VMware infrastructure
specialists.

To provide storage users adequate service, all parameters must be correctly set. Importantly,
the various parameters may be interdependent, that is, setting one of them might affect other
aspects of the volume. The volume parameters and their interdependencies are covered in
the following topics.

278 Implementing IBM FlashSystem 9200, 9100, 7200, and 5100 Systems with IBM Spectrum Virtualize V8.3.1
6.2.1 Volume type
The type attribute of a volume defines the method of allocation of extents that make up the
volume copy:
򐂰 A striped volume contains a volume copy that has extents that are allocated from multiple
MDisks from the storage pool. By default, extents are allocated by using a round-robin
algorithm from all MDisks in the storage pool. However, it is possible to supply a list of
MDisks to use for volume creation.

Attention: By default, striped volume copies are striped across all MDisks in the
storage pool. If some of the MDisks are smaller than others, the extents on the smaller
MDisks are used up before the larger MDisks run out of extents. Manually specifying
the stripe set in this case might result in the volume copy not being created.

If you are unsure if sufficient free space is available to create a striped volume copy,
select one of the following options:
򐂰 Check the free space on each MDisk in the storage pool by running the
lsfreeextents command.
򐂰 Allow the system to automatically create the volume copy by not supplying a specific
stripe set.

Figure 6-1 Striped extent allocation

򐂰 A sequential volume contains a volume copy with extents that are allocated sequentially
on one MDisk.
򐂰 An image mode volume is a special type of volume that has a direct one-to-one mapping to
one (image mode) MDisk.

For striped volumes, the extents are allocated from the set of MDisks (by default, all MDisks in
the storage pool):
򐂰 An MDisk is picked by using a pseudo-random algorithm and an extent is allocated from
this MDisk. This approach minimizes the probability of triggering the striping effect, which
might lead to poor performance for workloads that generate many metadata I/Os, or that
create multiple sequential streams.
򐂰 All subsequent extents (if required) are allocated from the MDisk set by using a
round-robin algorithm.
򐂰 If an MDisk has no free extents when its turn arrives, the algorithm moves to the next
MDisk in the set that has a free extent.



Note: The striping effect occurs when multiple logical volumes that are defined on a set of
physical storage devices (MDisks) store their metadata or file system transaction log on the
same physical device (MDisk).

Because of the way the file systems work, many I/Os to file system metadata disk regions
exist. For example, for a journaling file system, a write to a file might require two or more
writes to the file system journal: at minimum, one to make a note of the intended file
system update, and one marking successful completion of the file write.

If multiple volumes (each with its own file system) are defined on the same set of MDisks,
and all (or most) of them store their metadata on the same MDisk, a disproportionately
large I/O load is generated on this MDisk, which can result in suboptimal performance of
the storage system. Pseudo-randomly allocating the first MDisk for new volume extent
allocation minimizes the probability that multiple file systems that are created on these
volumes place their metadata regions on the same physical MDisk.
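To illustrate the two allocation options for striped volumes, the following CLI sketch first
creates a volume that is striped across all MDisks in the pool, and then one that is restricted
to an explicit stripe set. The pool, volume, and MDisk names are hypothetical:

IBM_Storwize:ITSOV7K:superuser>mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -vtype striped -name vol_striped
Virtual Disk, id [20], successfully created
IBM_Storwize:ITSOV7K:superuser>mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -vtype striped -mdisk mdisk0:mdisk1 -name vol_striped_subset
Virtual Disk, id [21], successfully created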

6.2.2 Managed mode and image mode


Volumes are configured within IBM Spectrum Virtualize by allocating a set of extents off one
or more managed mode MDisks in the storage pool. Extents are the smallest allocation unit at
the time of volume creation, so each MDisk extent maps to exactly one volume extent.

Note: An MDisk extent maps to exactly one volume extent. For volumes with two copies,
one volume extent maps to two MDisk extents (one for each volume copy).

Figure 6-2 shows this mapping. It also shows a volume that consists of several extents that
are shown as V0 - V7. Each of these extents is mapped to an extent on one of the MDisks: A,
B, or C. The mapping table stores the details of this indirection.

Figure 6-2 Simple view of block virtualization

Several of the MDisk extents are unused, that is, no volume extent maps to them. These
unused extents are available for use in creating volumes, migration, and expansion.

The default and most common type of volumes in IBM Spectrum Virtualize are managed
mode volumes. Managed mode volumes are allocated from a set of MDisk belonging to a
storage pool, and they can be subjected to the full set of virtualization functions. In particular,
they offer full flexibility in mapping between logical volume representation (logical blocks) and
physical storage that used to store these blocks. This function requires that physical storage
(MDisks) is fully managed by IBM Spectrum Virtualize, which means that the LUs that are
presented to IBM Spectrum Virtualize by the back-end storage systems do not contain any
data when they are added to the storage pool.

Image mode volumes enable IBM Spectrum Virtualize to work with LUs that were previously
directly mapped to hosts, which is often required when IBM Spectrum Virtualize is introduced
into a storage environment and image mode volumes are used to enable seamless migration
of data and a smooth transition to virtualized storage.

Image mode creates a one-to-one mapping of logical block addresses (LBAs) between a
volume and a single MDisk (an LU that is presented by the virtualized storage). Image mode
volumes have a minimum size of one block (512 bytes) and always occupy at least one
volumes have a minimum size of one block (512 bytes) and always occupy at least one
extent. An image mode MDisk cannot be used as a quorum disk and no IBM Spectrum
Virtualize system metadata extents are allocated from it. All the IBM Spectrum Virtualize copy
services functions can be applied to image mode disks. The difference between a managed
mode volume (with striped extent allocation) and an image mode volume is shown in
Figure 6-3.

Figure 6-3 An image mode volume versus a striped volume

An image mode volume is mapped to only one image mode MDisk, and it is mapped to the
entirety of this MDisk. Therefore, the image mode volume capacity is equal to the size of the
corresponding image mode MDisk. If the size of the (image mode) MDisk is not a multiple of
the MDisk group’s extent size, the last extent is marked as partial (not filled).

When you create an image mode volume, you map it to an MDisk that must be in unmanaged
mode and must not be a member of a storage pool. As the image mode volume is configured,
the MDisk is made a member of the specified storage pool. It is a best practice to use a
dedicated pool for image mode MDisks with a name indicating its role, such as Storage
Pool_IMG_xxx.



IBM Spectrum Virtualize also supports the reverse process in which a managed mode
volume can be migrated to an image mode volume. The extent size that is chosen for this
specific storage pool must be the same as the extent size of the storage pool into which you
plan to migrate the data off the image mode volumes. If a volume is migrated to another
MDisk, it is represented as being in managed mode during the migration. Its mode changes to
“image” only after the process completes.
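
The following CLI sketch outlines both directions of this process. The volume, MDisk, and
pool names are placeholders, and the commands assume that mdisk10 and mdisk11 are
unmanaged; verify the exact syntax against the CLI reference for your code level:

   # Import an existing LU as an image mode volume:
   mkvdisk -mdiskgrp Pool_IMG_001 -iogrp io_grp0 -vtype image -mdisk mdisk10 -name legacy-vol01
   # Migrate the image mode volume into a managed pool (the volume becomes striped):
   migratevdisk -vdisk legacy-vol01 -mdiskgrp Pool0
   # Reverse process: migrate a managed mode volume onto a single MDisk as an image mode volume:
   migratetoimage -vdisk legacy-vol01 -mdisk mdisk11 -mdiskgrp Pool_IMG_001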

Volumes have the following characteristics or attributes:


򐂰 Volumes can be configured as managed or image mode.
򐂰 Volumes can have extents that are allocated in striped or sequential mode.
򐂰 Volumes can be created as standard-provisioned or thin-provisioned. A conversion from a
standard-provisioned to a thin-provisioned volume and vice versa can be done at run time.
򐂰 Volumes can be configured with a single data copy or mirrored to make them resistant to
disk subsystem failures or to improve read performance.
򐂰 Volumes can be compressed.
򐂰 Volumes can be configured as VVOLs, which allows VMware vCenter to manage storage
objects. The storage system administrator can create these objects and assign ownership
to VMware administrators to allow them to manage these objects.
Volumes have two major modes: Managed mode and image mode. For managed mode
volumes, the sequential policy and the striped policy define how the extents of the volume
are allocated from the storage pool.
򐂰 Volumes can be replicated synchronously or asynchronously. An IBM Spectrum Virtualize
system can maintain active volume mirrors with a maximum of three other IBM Spectrum
Virtualize systems, but not from the same source volume.

6.2.3 Size
Each volume has two associated values that describe its size: real capacity and virtual
capacity.
򐂰 The real (physical) capacity is the size of storage space that is allocated to the volume
from the storage pool. It determines how many MDisk extents are allocated to form the
volume.
򐂰 The virtual capacity is capacity that is reported to the host and IBM Spectrum Virtualize
components (for example, IBM FlashCopy, cache, and Remote Copy (RC)).

In a standard-provisioned volume, these two values are the same. In a thin-provisioned


volume, the real capacity can be as little as a few percent of virtual capacity. The real capacity
is used to store the user data, and in the case of thin-provisioned volumes, the metadata of
the volume. The real capacity can be specified as an absolute value or as a percentage of the
virtual capacity. The volume size can be specified in units down to 512-byte blocks (see
Figure 6-4 on page 283).

Figure 6-4 Smallest possible volume size

A volume is composed of storage pool extents, so it is not possible to allocate less than one
extent to create a volume. Effectively, the internal unit of volume size is the extent size of the
pool (or pools) in which the volume is created.

For example, a basic volume of 512 bytes that is created in a pool with the default extent size
(1024 mebibytes (MiB)) uses 1024 MiB of the pool space because a whole extent must be
allocated to provide the space for the volume.

In practice, this rounding up of volume size to the whole number of extents has little impact on
storage use efficiency unless the storage system serves many small volumes. For more
information about storage pools and extents, see Chapter 5, “Storage pools” on page 221.
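
As an illustration of this rounding, the following minimal CLI sketch creates the smallest
possible volume (the pool and volume names are placeholders). Although the volume reports
a virtual size of 512 bytes, it still consumes one whole extent in the pool:

   mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 512 -unit b -name tiny-vol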

6.2.4 Performance
The basic metrics of volume performance are the number of IOPS the volume can provide,
the typical time to service an IO request, and the bandwidth of the data that is served to a
host.

Volume performance is defined by the pool or pools that are used to create the volume. The
pool determines the media bus (Non-Volatile Memory Express (NVMe) or Serial Advanced
Technology Attachment (SATA)), the media type (IBM FlashCore Module (FCM) drives,
solid-state drives (SSDs), or hard disk drives (HDDs)), the redundant array of independent
disks (RAID) level, the number of drives per RAID array, and whether the Easy Tier function
can optimize the performance of the volume. Additionally, the storage efficiency, security, and
allocation policy configuration settings of a volume might affect these characteristics.



6.2.5 Volume copies
A volume can have one or two physical copies. Although a second copy is not required, the
two copies of a mirrored volume are typically allocated from different storage pools that are
backed by different physical hardware to increase volume resiliency. Each copy of the volume
has the same virtual capacity, but the two copies can have different characteristics, including
different real capacity. However, a volume copy is not a separate object and can be
manipulated only in the context of the volume. A mirrored volume behaves in the same way
as any other volume, such as:
򐂰 All its copies are expanded or shrunk when the volume is resized.
򐂰 It can participate in FlashCopy and RC relationships.
򐂰 It is serviced by an I/O group.
򐂰 It has a preferred node.

Volume copies are identified in the GUI by a copy ID, which can have value 0 or 1. Copies of
the volume can be split, which provides a point-in-time (PiT) copy of a volume. An overview of
volume mirroring is shown in Figure 6-5.

Figure 6-5 Volume mirroring overview

A copy can be added to a volume with a single copy or removed from a volume with two
copies. Internal safety mechanisms prevent accidental removal of the only remaining copy of
a volume.

A newly created, unformatted volume with two copies initially has the two copies in an
out-of-synchronization state. The primary copy is defined as “fresh” and the secondary copy
is defined as “stale”, and the volume is immediately available for use.

The synchronization process updates the secondary copy until it is fully synchronized, that is,
data that is stored on the secondary copy matches the data that is on the primary copy. This
update is done at the synchronization rate that is defined when the volume is created, and it
can be modified after volume creation. The synchronization status for mirrored volumes is
recorded on the storage system quorum disk.

If a mirrored volume is created by using the format parameter, both copies are formatted in
parallel. The volume comes online when both operations are complete with the copies in
sync.

If it is known that MDisk space (which is used for creating volume copies) is formatted or if the
user does not require read stability, a no synchronization option can be used that declares
the copies as synchronized even when they are not.

Creating more volume copies is beneficial in multiple scenarios. For example:


򐂰 Improving volume resilience by protecting them from a single storage system failure, which
requires that each volume copy be configured on a different storage system.
򐂰 Providing concurrent maintenance of a storage system that does not natively support
concurrent maintenance (for volumes on external virtualized storage).
򐂰 Providing an alternative method of data migration with improved availability
characteristics. While a volume is being migrated by using the data migration feature, it is
vulnerable to failures on both the source and target storage pool. Volume mirroring
provides an alternative migration method that is not affected by the destination volume
pool availability.
For more information about this volume migration method, see “Volume migration by
adding a volume copy” on page 338.

Note: When migrating volumes to a Data Reduction Pool (DRP), volume mirroring is
the only migration method because DRPs do not support migrate commands.

򐂰 Converting between standard-provisioned volumes and thin-provisioned volumes (in


either direction).
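
A minimal CLI sketch of this migration method follows (the pool and volume names are
placeholders). The -autodelete option removes the original copy automatically after the new
copy is synchronized, which is convenient when migrating into a DRP:

   addvdiskcopy -mdiskgrp TargetPool -autodelete myvol
   # Alternatively, add the copy, monitor synchronization, and remove the old copy manually:
   addvdiskcopy -mdiskgrp TargetPool myvol
   lsvdisksyncprogress myvol
   rmvdiskcopy -copy 0 myvol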

If one of the mirrored volume copies is temporarily unavailable (for example, because the
storage system that provides the pool is unavailable), the volume remains accessible to
servers. The storage system remembers which areas of the volume were modified after loss
of access to a volume copy and resynchronizes only these areas when both copies are
available.

Note: Volume mirroring is not a disaster recovery (DR) solution because both copies are
accessed by the same node pair and addressable by only a single cluster. However, if
correctly planned, it can improve availability.

The storage system tracks the synchronization status of volume copies by dividing the volume
into 256 kibibyte (KiB) grains and maintaining a bitmap of stale grains (on the quorum disk),
mapping 1 bit to one grain of the volume space. If the mirrored volume needs
resynchronization, the system copies to the out-of-sync volume copy only these grains that
were written to (changed) since the synchronization was lost. This approach is known as an
incremental synchronization, and it minimizes the time that is required to synchronize the
volume copies.

Important: Mirrored volumes can be taken offline if no quorum disk is available. This
behavior occurs because the synchronization status of mirrored volumes is recorded on
the quorum disk.



A volume with more than one copy can be checked to determine whether all of the copies are
identical or consistent. If a medium error is encountered while reading from one copy, it is
repaired by using data from the other copy. This consistency check is performed
asynchronously with host I/O.

Because mirrored volumes use bitmap space at a rate of 1 bit per 256 KiB grain, 1 MiB of
bitmap space supports up to 2 TiB of mirrored volumes. The default size of the bitmap space
is 20 MiB, which allows a configuration of up to 40 TiB of mirrored volumes. If all 512 MiB of
variable bitmap space is allocated to mirrored volumes, 1 PiB of mirrored volumes can be
supported.

Table 6-1 lists the bitmap space configuration options.

Table 6-1   Bitmap space default configuration

  Copy service       Minimum     Default     Maximum     Minimum(a) capacity when
                     allocated   allocated   allocated   using the default values
                     bitmap      bitmap      bitmap
                     space       space       space
  ------------------ ----------- ----------- ----------- -----------------------------------
  RC(b)              0           20 MiB      512 MiB     40 TiB of remote mirroring
                                                         volume capacity
  FlashCopy(c)       0           20 MiB      2 GiB       򐂰 10 TiB of FlashCopy source
                                                           volume capacity
                                                         򐂰 5 TiB of incremental FlashCopy
                                                           source volume capacity
  Volume mirroring   0           20 MiB      512 MiB     40 TiB of mirrored volumes
  RAID               0           40 MiB      512 MiB     򐂰 80 TiB array capacity by using
                                                           RAID 0, 1, or 10
                                                         򐂰 80 TiB array capacity in a
                                                           three-disk RAID 5 array
                                                         򐂰 Slightly less than 120 TiB array
                                                           capacity in a five-disk RAID 6
                                                           array

a. The actual amount of available capacity might increase based on the settings, such as grain
   size and strip size. RAID is subject to a 15% margin of error.
b. RC includes Metro Mirror (MM), Global Mirror (GM), and active-active relationships.
c. FlashCopy includes the FlashCopy function, Global Mirror with Change Volumes (GMCV), and
   active-active relationships.

The sum of all bitmap memory allocation for all functions except FlashCopy must not exceed
552 MiB.
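
The bitmap space is allocated per I/O group and can be displayed and adjusted from the CLI,
as in the following sketch (the feature keywords shown are documented values; verify them
against your code level):

   lsiogrp io_grp0                              # shows the current memory allocations
   chiogrp -feature mirror -size 40 io_grp0     # grows the volume mirroring bitmap to 40 MiB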

6.2.6 I/O operations data flow
Although a mirrored volume looks to its users the same as a volume with a single copy, some
differences exist in how I/O operations are performed internally for volumes with one copy or
two copies.

Read I/O operations data flow


If the volume is mirrored (that is, two copies of the volume exist), one copy is known as the
primary copy. If the primary is available and synchronized, reads from the volume are
directed to that copy. The user selects which copy is primary when the volume is created, but
they can change this setting at any time. In the management GUI, an asterisk indicates the
primary copy of the mirrored volume. Placing the primary copy on a high-performance
controller maximizes the read performance of the volume.

For non-mirrored volumes, only one volume copy exists, so no choice exists for the read
source, and all reads are directed to the single volume copy.
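
The primary copy can also be changed at any time from the CLI, for example (the volume
name is a placeholder):

   chvdisk -primary 1 myvol     # direct reads to copy 1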

Write I/O operations data flow


For write I/O operations to a mirrored volume, the host sends the I/O request to the preferred
node, which is responsible for destaging the data from cache. Figure 6-6 shows the data flow
for this scenario.

Figure 6-6 Data flow for write I/O processing in a mirrored volume



As shown in Figure 6-6 on page 287, the writes are sent by the host to the preferred node for
the volume (1). Then, the data is mirrored to the cache of the partner node in the I/O group
(2), and acknowledgment of the write operation is sent to the host (3). The preferred node
then destages the written data to all volume copies (4). The example that is shown in
Figure 6-7 shows a case with destaging to a mirrored volume, that is, one with two physical
data copies.

With Version 7.3, the cache architecture changed from an upper-cache design to a two-layer
cache design. With this change, the data is written once, and then it is directly destaged from
the controller to the disk system.

Figure 6-7 shows the data flow in a stretched environment.

Figure 6-7 Design of an enhanced stretched cluster

6.2.7 Storage efficiency


With IBM Spectrum Virtualize, you can configure DRPs that provide several technologies that
increase the efficiency of physical storage use:
򐂰 Thin provisioning
򐂰 Deduplication (block-level and pattern-matching)
򐂰 Compression
򐂰 SCSI UNMAP support

Note: Storage efficiency options might require more licenses and hardware components
depending on the model and configuration of your storage system.

Implementation of DRPs requires careful planning and sizing. Before configuring the first
space-efficient volume on a storage system, see the relevant sections in Chapter 2,
“Planning” on page 77 and Chapter 9, “Advanced features for storage efficiency” on
page 463.

DRPs use multithreading and hardware acceleration (where available) to provide storage
efficiency functions on IBM Spectrum Virtualize storage systems. Using storage efficiency
options increases the number of I/O operations that the storage system must perform
compared to access to a basic volume because space-efficient volumes require the storage
system to write both the data that is sent by the host and the metadata that is required to
maintain the space-efficient volume.

Note: FCM drives include compression hardware, so they provide data set size reduction
with no performance penalty.

For more information about the storage efficiency functions of IBM Spectrum Virtualize, see
Chapter 5, “Storage pools” on page 221 and Introduction and Implementation of Data
Reduction Pools and Deduplication, SG24-8430.

It is possible to benefit from both compression and data-at-rest encryption because


encryption is done after compression. However, the size of data that is encrypted at the host
level is unlikely to be reduced by either compression or deduplication at the storage system.

Standard and thin-provisioned volumes


A standard-provisioned volume directly maps logical blocks on the virtual volume to physical
blocks on storage media. Therefore, its virtual and physical capacities are identical.

A thin-provisioned volume has virtual capacity larger than physical capacity. Thin-provisioning
is the base technology for all space-efficient volumes. When a thin-provisioned volume is
created, a small amount of the real capacity is used for initial metadata. This metadata holds
a mapping of an LBA in the volume to a grain on a physically allocated extent.

When a write request comes from a host, the block address for which the write is requested is
checked against the mapping table. If a previous write to a block on the same grain as the
incoming request exists, physical storage was already allocated for this LBA, and it can be
used to service the request. Otherwise, a new physical grain is allocated to store the data,
and the mapping table is updated to record that allocation.

Note: If you use thin-provisioned volumes, closely monitor the available space in the pool
that contains these volumes. If a thin-provisioned volume does not have enough real
capacity for a write operation, the volume is taken offline and an error is logged. There is
limited ability to recover by using UNMAP. Also, consider creating a fully allocated
sacrificial emergency space volume.

The grain size is defined when the volume is created and cannot be changed afterward. The
grain size can be 32 KiB, 64 KiB, 128 KiB, or 256 KiB. The default grain size is 256 KiB, which
is the preferred option. However, the following factors must be considered when deciding on
the grain size:
򐂰 A smaller grain size helps to save space. If a 16 KiB write I/O requires a new physical grain
to be allocated, the used space is 50% of a 32 KiB grain, but just over 6% of a 256 KiB
grain. If no subsequent writes to other blocks of the grain occur, the volume provisioning is
less efficient for volumes with a larger grain size.
򐂰 A smaller grain size requires more metadata I/O to be performed, which increases the
load on the physical back-end storage systems.



򐂰 When a thin-provisioned volume is a FlashCopy source or target volume, specify the same
grain size for FlashCopy and the thin-provisioned volume configuration. Use 256 KiB grain
to maximize performance.
򐂰 The grain size affects the maximum size of the thin-provisioned volume. For 32 KiB size,
the volume size cannot exceed 260 TiB.

Figure 6-8 shows the thin-provisioning concept.

Figure 6-8 Conceptual diagram of a thin-provisioned volume

Thin-provisioned volumes use metadata to enable capacity savings, and each grain of user
data requires metadata to be stored. Therefore, the I/O rates that are obtained from
thin-provisioned volumes are lower than the I/O rates that are obtained from
standard-provisioned volumes.

The metadata storage that is used is never greater than 0.1% of the user data. The resource
usage is independent of the virtual capacity of the volume.

Thin-provisioned volume format: Thin-provisioned volumes do not need formatting. A


read I/O, which requests data from unallocated data space, returns zeros. When a write I/O
causes space to be allocated, the grain is “zeroed” before use. Additionally, when a
full-grain write consists of “all zeros”, no space is physically allocated on disk.

The real capacity of a thin-provisioned volume can be changed if the volume is not in image
mode. Thin-provisioned volumes use the grains of real capacity that is provided in ascending
order as new data is written to the volume. If the user initially assigns too much real capacity
to the volume, the real capacity can be reduced to free storage for other uses.

A thin-provisioned volume can be configured to autoexpand. This feature causes


IBM Spectrum Virtualize to automatically add a fixed amount of extra real capacity to the
thin-provisioned volume as required. Autoexpand does not cause the real capacity to grow
much beyond the virtual capacity. Instead, it attempts to maintain a fixed amount of unused
real capacity for the volume, which is known as the contingency capacity.

The contingency capacity is initially set to the real capacity that is assigned when the volume
is created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.

A volume that is created without the autoexpand feature and has zero contingency capacity
goes offline as soon as all of its real capacity is used and it needs to expand.

To facilitate management of the auto expansion of thin-provisioned volumes, a capacity


warning should be set for the storage pools from which they are allocated. When the used
capacity of the pool exceeds the warning capacity, a warning event is logged. For example, if
a warning of 80% is specified, an event is logged when 20% of the pool capacity remains free.

A thin-provisioned volume can be converted nondisruptively to a standard-provisioned
volume (or vice versa) by using the volume mirroring function. For example, you can add a
thin-provisioned copy to a volume with a standard-provisioned primary copy, and then
remove the standard-provisioned copy after the two copies are synchronized.

The standard-provisioned to thin-provisioned migration procedure uses a zero-detection


algorithm so that grains that contain all zeros do not cause any real capacity to be used.

Thin-provisioned volumes can be used as FlashCopy target volumes that are assigned to
hosts, which implements thin-provisioned FlashCopy targets. When creating a mirrored
volume, a thin-provisioned volume can be created as a second volume copy, whether the
primary copy is a standard-provisioned or thin-provisioned volume.
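
As an example, the following CLI sketch creates a thin-provisioned volume with 2% initial real
capacity, autoexpand, an 80% warning threshold, and the default 256 KiB grain size (the pool
and volume names are placeholders):

   mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 100 -unit gb -rsize 2% -autoexpand -warning 80% -grainsize 256 -name thin-vol01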

Deduplicated volumes
Deduplication is a specialized data set reduction technique. However, in contrast to the
standard file-compression tools that work on single files or sets of files, deduplication is a
technique that is applied on a larger scale, such as a file system or volume. In IBM Spectrum
Virtualize, deduplication can be enabled for thin-provisioned and compressed volumes that
are created in DRPs.

IBM Spectrum Virtualize uses two techniques to detect duplicate data:


򐂰 Pattern matching
򐂰 Data signature (hash)

Deduplication works by identifying repeating chunks in the data that is written to the storage
system. Pattern matching looks for known data patterns (for example, “all ones”), and the
data signature-based algorithm calculates a signature for each data chunk (by using a hash
function) and checks whether the calculated signature is present in the deduplication
database.

If a known pattern or a signature match is found, the data chunk is replaced by a reference to
a stored chunk, which reduces storage space that is required for storing the data. Conversely,
if no match is found, the data chunk is stored without modification, and its signature is added
to the deduplication database.

To maximize the space that is available for the deduplication database, the system distributes
it between all nodes in the I/O groups that contain deduplicated volumes. Each node holds
a distinct portion of the records that are stored in the database. If nodes are removed from or
added to the system, the database is redistributed between the nodes to ensure optimal use
of available resources.



Depending on the data type that is stored on the volume, the capacity savings can be
significant. Examples of use cases that typically benefit from deduplication are virtual
environments with multiple VMs running the same operating system (OS) and backup
servers. In both cases, it is expected that multiple copies of identical files exist, such as
components of the standard OS or applications that are used in the organization. However,
data that is encrypted at the file system level might not benefit from deduplication because
encryption of the same file with different keys provides different output, which makes
deduplication impossible.

Although deduplication (as with other features of IBM Spectrum Virtualize) is transparent to
users and applications, it must be planned for and understood before implementation
because it might reduce the redundancy of a solution. For example, if an application stores
two copies of a file to reduce the chances of data corruption because of a random event, the
copies are deduplicated and the intended redundancy is removed from the system if these
copies are on the same volume.

When planning the use of deduplicated volumes, be aware of update and performance
considerations and the following software and hardware requirements:
򐂰 Code level V8.1.2 or higher is needed for DRPs.
򐂰 Code level V8.1.3 or higher is needed for deduplication.
򐂰 Nodes must have at least 32 GB of memory to support deduplication. Nodes that have
more than 64 GB of memory can use a bigger deduplication fingerprint database, which
might lead to better deduplication results.
򐂰 You must run supported hardware. For more information about the valid hardware and
features combinations, go to IBM Knowledge Center, select your system, and read the
“Planning for deduplicated volumes” section by expanding Planning → Storage
configuration planning.
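
When these requirements are met, deduplicated volumes can be created in a DRP, as in the
following CLI sketch (the pool name is a placeholder and must refer to a DRP):

   mkvolume -pool DRPool0 -size 200 -unit gb -thin -deduplicated -name dedup-thin01
   mkvolume -pool DRPool0 -size 200 -unit gb -compressed -deduplicated -name dedup-comp01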

Compressed volumes
A volume that is created in a DRP can be compressed. Data that is written to the volume is
compressed before committing it to back-end storage, which reduces the physical capacity
that is required to store the data. Because enabling compression does not incur an extra
metadata handling penalty, in most cases it is a best practice to enable compression on
thin-provisioned volumes.

Note: When a volume is backed by FCM drives that compress data at line speed, the
volume should be configured with compression that is switched on. IBM Spectrum
Virtualize is tightly integrated with the storage controller and uses knowledge of both the
logical and physical space.

Note: You can use the management GUI or the CLI to run the built-in compression
estimation tool. This tool can be used to determine the capacity savings that are possible
for existing data on the system by using compression.
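
From the CLI, the estimation tool can be started and queried as in the following sketch (the
volume name is a placeholder):

   analyzevdisk myvol           # samples a single volume
   analyzevdiskbysystem         # queues analysis for all volumes on the system
   lsvdiskanalysis myvol        # displays the estimated savings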

Note: Another benefit of data compression for volumes that are backed by flash-based
storage is the reduction of write amplification, which has a beneficial effect on media
longevity.

Capacity reclamation
File deletion in modern file systems is realized by updating file system metadata and marking
the physical storage space that is used by the removed file as unused. The data of the
removed file is not overwritten, which improves file system performance by reducing the
number of I/O operations on physical storage that is required to perform file deletion.

However, this approach affects the management of the real capacity of volumes with enabled
capacity savings. File system deletion frees space at the file system level, but physical data
blocks that are allocated by the storage for the file take up the real capacity of a volume.

To address this issue, file systems added support for the SCSI UNMAP command, which can be
run after file deletion. It informs the storage system that physical blocks that are used by the
removed file should be marked as no longer in use so that they can be freed. Modern OSs run
SCSI UNMAP commands only to storage that advertises support for this feature.

Version 8.1.0 and later releases support the SCSI UNMAP command on IBM Spectrum
Virtualize systems, which enables hosts to notify the storage controller of capacity that is no
longer required and can be reused or deallocated, potentially improving capacity savings.

Note: For volumes that are outside DRPs, the complete stack from the OS down to
back-end storage controller must support UNMAP to enable the capacity reclamation.
SCSI UNMAP is passed only to specific back-end storage controllers.

Consider the following points:


򐂰 Version 8.1.2 can also reclaim capacity in DRPs when a host runs SCSI UNMAP commands.
򐂰 By default, Version 8.2.1 does not advertise support for SCSI UNMAP to hosts.
򐂰 In Version 8.3.1, support for the host SCSI UNMAP command is enabled by default.

Before enabling SCSI UNMAP, see SCSI Unmap support in IBM Spectrum Virtualize
systems.

Analyze your storage stack to optimally balance the advantages and costs of data
reclamation.
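
As a sketch, the host UNMAP setting can be checked and changed from the CLI. The
parameter name below is as documented for recent code levels; confirm it for your release
before use:

   lssystem                     # check the host_unmap field in the output
   chsystem -hostunmap on       # advertise SCSI UNMAP support to hosts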

6.2.8 Encryption
IBM Spectrum Virtualize systems can be configured to enable data-at-rest encryption. This
function is realized in hardware (self-encrypting drives or in serial-attached SCSI (SAS)
controller for drives that do not support self-encryption and are connected through the SAS
bus) or in software (external virtualized storage).

For more information about creating and managing encrypted volumes, see Chapter 12,
“Encryption” on page 701.

6.2.9 Cache mode


Another volume parameter is its cache characteristics. Under normal conditions, a volume’s
read and write data is held in the cache of its preferred node with a mirrored copy of write
data that is held in the partner node of the same I/O group. However, it is possible to create a
volume with different cache characteristics if this configuration is required.

The cache setting of a volume can have the following values:


readwrite All read and write I/O operations that are performed by the volume are
stored in cache. This mode is the default cache mode for all volumes.
readonly Only read I/O operations that are performed on the volume are stored
in cache. Writes to the volume are not cached.



disabled No I/O operations on the volume are stored in cache. I/Os are passed
directly to the back-end storage controller rather than being held in the
node’s cache.

Having cache-disabled volumes makes it possible to use the native copy services in the
underlying RAID array controller for MDisks (logical unit numbers (LUNs)) that are used as
IBM Spectrum Virtualize image mode volumes. However, using IBM Spectrum Virtualize
Copy Services rather than the underlying disk controller copy services provides better results.

Note: Disabling the volume cache is a prerequisite for using native copy services on image
mode volumes that are defined on storage systems that are virtualized by IBM Spectrum
Virtualize. Contact IBM Support before turning off the cache for volumes in your production
environment to avoid performance degradation.
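
The cache mode can be changed after creation from the CLI, as in this sketch (the volume
name is a placeholder; the CLI keyword none corresponds to the disabled setting):

   chvdisk -cache none image-vol01
   chvdisk -cache readwrite image-vol01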

6.2.10 I/O throttling


You can set a limit on the number of I/O operations that are realized by a volume. This
limitation is called I/O throttling or governing.

The limit can be set in terms of number of IOPS or bandwidth (megabytes per second
(MBps), gigabits per second (Gbps), or terabytes per second (TBps)). By default, I/O throttling
is disabled, but each volume can have up to two throttles that are defined: one for bandwidth
and one for IOPS.

When deciding between using IOPS or bandwidth as the I/O governing throttle, consider the
disk access profile of the application that is the primary volume user. Database applications
generally issue large amounts of I/O operations, but transfer a relatively small amount of data.
In this case, setting an I/O governing throttle that is based on bandwidth might not achieve
much. A throttle that is based on IOPS is better suited for this use case.

Conversely, a video streaming application issues a small amount of I/O but transfers large
amounts of data. Therefore, it is better to use a bandwidth throttle for the volume in this case.

An I/O governing rate of 0 does not mean that zero IOPS or bandwidth can be achieved for
this volume; rather, it means that no throttle is set for this volume.

Note: Consider the following points:


򐂰 I/O governing does not affect FlashCopy and data migration I/O rates.
򐂰 I/O governing on MM or GM secondary volumes does not affect the rate of data copy
from the primary volume.

For more information about how to configure I/O throttle on a volume, see 6.5.4, “I/O
throttling” on page 313.
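
For reference, a CLI sketch of throttle management follows (the volume and throttle names
are placeholders; the bandwidth value is in MBps):

   mkthrottle -type vdisk -iops 10000 -vdisk db-vol01          # IOPS throttle
   mkthrottle -type vdisk -bandwidth 400 -vdisk media-vol01    # bandwidth throttle
   lsthrottle                                                  # list the defined throttles
   rmthrottle throttle0                                        # remove a throttle by name or ID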

6.2.11 Volume protection


Volume protection prevents volumes or host mappings from being deleted if the system
detects recent I/O activity. This global setting is enabled by default on new systems. You can
either set this value to apply to all volumes that are configured on your system or control
whether the system-level volume protection is enabled or disabled on specific pools.

There are two levels at which the volume protection must be enabled to be effective: system
level and pool level. Both levels must be enabled for protection to be active on a pool. The
pool-level protection depends on the system-level setting to ensure that protection is applied
consistently for volumes within that pool. If system-level protection is enabled, but pool-level
protection is not enabled, any volumes in the pool can be deleted.

When you enable volume protection at the system level, you specify a period in minutes that
the volume must be idle before it can be deleted. If volume protection is enabled and the
period is not expired, the volume deletion fails even if the -force parameter is used. To
prevent volumes that are configured in a pool from inadvertent deletion, enable volume
protection at the pool level.

The following CLI commands and the corresponding GUI activities are affected by the volume
protection setting:
򐂰 rmvdisk
򐂰 rmvdiskcopy
򐂰 rmvolume
򐂰 rmvdiskhostmap
򐂰 rmvolumehostclustermap
򐂰 rmmdiskgrp
򐂰 rmhostiogrp
򐂰 rmhost
򐂰 rmhostcluster
򐂰 rmhostport
򐂰 mkrcrelationship

Volume protection can be set from the GUI (new in V8.3.1, see 6.5.5, “Volume protection” on
page 319) and the CLI (see 6.6.9, “Volume protection” on page 355).
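
A CLI sketch of both levels follows (the pool name is a placeholder; the protection time is in
minutes; parameter names are as documented for V8.3.1, so verify them for your release):

   chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 15   # system level
   chmdiskgrp -vdiskprotectionenabled yes Pool0                   # pool level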

6.2.12 Secure data deletion


The system provides methods to securely erase data from a drive or from a boot drive when a
control enclosure is decommissioned.

Secure data deletion effectively erases or overwrites all traces of existing data from a data
storage device. The original data on that device becomes inaccessible and cannot be
reconstructed. You can securely delete data on individual drives and on a boot drive of a
control enclosure. The methods and commands that are used to securely delete data enable
the system to be used in compliance with European Regulation EU2019/424.

The procedure is described in IBM Knowledge Center.

6.3 Virtual volumes


IBM Spectrum Virtualize V7.6 introduced support for virtual volumes. These volumes enable
support for VMware vSphere Virtual Volumes (VVOLs), which allow VMware vCenter to
manage system objects, such as volumes and pools. The IBM Spectrum Virtualize system
administrators can create volume objects of this class, and assign ownership to VMware
administrators to simplify management.

For more information about configuring VVOLs with IBM Spectrum Virtualize, see Configuring
VMware Virtual Volumes for Systems Powered by IBM Spectrum Virtualize, SG24-8328.



6.4 Volumes in multi-site topologies
IBM Spectrum Virtualize can be set up in a multi-site configuration, which makes the system
aware of which system components (I/O groups, nodes, and back-end storage) are at which
site. For defining the storage topology, a site is defined as an independent failure domain,
which means that if one site fails, the other site can continue to operate without disruption.

The sites can be in the same data center room or across rooms in the data center, in buildings
on the same campus, or buildings in different cities, depending on the type and scale of the
failure that the solution must survive.

The following topologies are available:


򐂰 Standard topology, which is intended for single-site configurations that do not allow site
definition and assume that all components of the solution are at a single site. You can use
GM or MM to maintain a copy of a volume on a different system at a remote site, which
can be used for DR.
򐂰 IBM HyperSwap topology, which is a three-site high availability (HA) configuration in which
each I/O group is at a different site. A volume can be active on two I/O groups so that if
one site is not available, it can immediately be accessed through the other site.

Note: Multi-site topologies of IBM Spectrum Virtualize use two sites as component
locations (nodes and back-end storage), and a third site as a location for a tie-breaker
component that is used to resolve split-brain scenarios where the storage system
components lose communication with each other.

The Create Volumes menu provides the following options, depending on the configured
system topology:
򐂰 With standard topology, the available options are Basic, Mirrored, and Custom.
򐂰 With HyperSwap topology, the options are Basic, HyperSwap, and Custom.

The HyperSwap function provides HA volumes that are accessible through two sites up to
300 km (186.4 miles) apart. A fully independent copy of the data is maintained at each site.

Note: The determining factor for HyperSwap configuration validity is the time that it takes
to send the data between the sites. Therefore, while estimating the distance, consider the
fact that the distance between the sites that is measured along the data path is longer than
the geographic distance. Additionally, each device on the data path that adds latency
increases the effective distance between the sites.

When data is written by hosts at either site, both copies are synchronously updated before the
write operation completion is reported to the host. The HyperSwap function automatically
optimizes itself to minimize data that is transmitted between sites and to minimize host read
and write latency.

If the nodes or storage at either site go offline, the HyperSwap function automatically fails
over access to the other copy. The HyperSwap function also automatically resynchronizes the
two copies when possible.

The HyperSwap function is built on a foundation of two earlier technologies: the
Non-Disruptive Volume Move (NDVM) function that was introduced in IBM Spectrum
Virtualize V6.4, and the RC features that include MM, GM, and GMCV.

The HyperSwap volume configuration is possible only after the IBM Spectrum Virtualize
system is configured in the HyperSwap topology. After this topology change, the GUI
presents an option to create a HyperSwap volume and creates them by running the mkvolume
command instead of the mkvdisk command. The GUI continues to use the mkvdisk command
when all other classes of volumes are created.

Note: It is still possible to create HyperSwap volumes in Version 7.5.

For more information, see IBM Storwize V7000, Spectrum Virtualize, HyperSwap, and
VMware Implementation, SG24-8317.

For more information about HyperSwap topology, see IBM Knowledge Center.

From the perspective of a host or a storage administrator, a HyperSwap volume is a single


entity, but it is realized by using four volumes, a set of FlashCopy maps, and an RC
relationship (see Figure 6-9).

Figure 6-9 What makes up a HyperSwap volume

The GUI simplifies the HyperSwap volume creation process by asking about required volume
parameters only and automatically configuring all the underlying volumes, FlashCopy maps,
and volume replications relationships.

An example of a HyperSwap volume configuration is shown in Figure 6-29 on page 312.



6.5 Operations on volumes
This section describes how to perform operations on volumes by using the GUI. The following
operations can be performed on a volume:
򐂰 Volumes can be created and deleted.
򐂰 Volumes can have their characteristics modified, including:
– Size (expanding or shrinking)
– Number of copies (adding or removing a copy)
– I/O throttling
– Protection
򐂰 Volumes can be migrated at run time to another MDisk or storage pool.
򐂰 A PiT volume can be created by using FlashCopy. Multiple snapshots and quick restore
from snapshots (reverse FlashCopy) are supported.
򐂰 Volumes can be mapped to (and unmapped from) hosts.

Note: With Version 7.4 and later, it is possible to prevent accidental deletion of volumes if
they recently performed any I/O operations. This feature is called volume protection, and it
prevents active volumes or host mappings from being deleted inadvertently. This process
is done by using a global system setting. For more information, see 6.6.9, “Volume
protection” on page 355 and the “Enabling volume protection” topic in IBM Knowledge
Center.

6.5.1 Creating volumes


This section focuses on using the Create Volumes menu to create Basic and Mirrored
volumes in a system with a standard topology. Volume creation is available with the following
volume classes:
򐂰 Basic
򐂰 Mirrored
򐂰 Custom

To create a volume, complete the following steps:


1. Click the Volumes menu and click the Volumes option of the IBM Spectrum Virtualize
GUI, as shown in Figure 6-10 on page 299.

Figure 6-10 Volumes menu

A list of volumes, their state, capacity, and associated storage pools are displayed.
2. To create a volume, click Create Volumes, as shown in Figure 6-11.

Figure 6-11 Create Volumes

The Create Volumes tab opens the Create Volumes window, which shows the available
creation methods.

Note: The volume classes that are displayed in the Create Volumes window depend on the
topology of the system.



The Create Volumes window for standard topology is shown in Figure 6-12.

Figure 6-12 Basic, Mirrored, and Custom Volume Creation options

Note: Consider the following points:


򐂰 A basic volume is a volume that has only one physical copy, uses storage that is
allocated from a single pool on one site, and uses read/write cache mode.
򐂰 A mirrored volume is a volume with two physical copies, where each volume copy can
belong to a different storage pool.
򐂰 A custom volume (in the context of this menu) is a basic or mirrored volume with values
of some of its parameters that are changed from the defaults.
򐂰 The Create Volumes window also provides (by using the Capacity Savings parameter)
the ability to change the default provisioning of a basic or mirrored volume to
thin-provisioned or compressed. For more information, see “Capacity savings option”
on page 305.

Creating basic volumes


A basic volume is a volume that has only one physical copy. Basic volumes are supported in
any system topology and are common to all configurations. Basic volumes can be of any type
of virtualization: striped, sequential, or image. They can also use any type of capacity
savings: thin-provisioning, compressed, or none. Deduplication can be configured with
thin-provisioned and compressed volumes in DRPs for added capacity savings.

To create a basic volume, click Basic, as shown in Figure 6-13 on page 301. This action
opens the Basic volume menu, where you can define the following parameters:
򐂰 Pool: The Pool in which the volume is created (drop-down menu).
򐂰 Quantity: Number of volumes to be created (numeric up or down).

򐂰 Capacity: Size of the volume in specified units (drop-down menu).
򐂰 Capacity Savings (drop-down menu):
– None
– Thin-provisioned
– Compressed
򐂰 Name: Name of the volume (cannot start with a number).
򐂰 I/O group.

The Basic Volume creation window is shown in Figure 6-13.

Figure 6-13 Create Volumes window

Define and consistently use a suitable volume naming convention to facilitate easy
identification. For example, a volume name can contain the name of the pool or some tag that
identifies the underlying storage subsystem, the host or cluster name that the volume is
mapped to, and the content of this volume, such as the name of the applications that use the
volume.



When all of the characteristics of the basic volume are defined, it can be created by selecting
one of the following options:
򐂰 Create
򐂰 Create and Map

Note: The plus sign (+) icon can be used to create more volumes in the same instance of
the volume creation wizard.

In the example, the Create option was selected. The volume-to-host mapping can be
performed later, as described in 6.5.8, “Mapping a volume to a host” on page 330.

When the operation completes, the volume is seen in the Volumes pane in the state “Online
(formatting)”, as shown in Figure 6-14.

Figure 6-14 Basic volume formatting

By default, the GUI does not show any details about the commands it runs to complete a task.
However, while a command runs you can click View more details to see the underlying CLI
commands that are run to create the volume and a report of completion of the operation, as
shown in Figure 6-15.

Figure 6-15 Viewing more details of volume creation
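
As a representative example, the underlying command for a basic volume resembles the
following sketch; the exact parameters that the GUI generates depend on the selections that
are made in the wizard, and the names shown are placeholders:

   mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 100 -unit gb -name itso-basic00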

Note: Consider the following points:
򐂰 Standard-provisioned volumes are automatically formatted through the quick
initialization process after the volume is created. This process makes
standard-provisioned volumes available for use immediately.
򐂰 Quick initialization requires a small amount of I/O to complete, and limits the number of
volumes that can be initialized at the same time. Some volume actions, such as moving,
expanding, shrinking, or adding a volume copy, are disabled when the specified volume
is initializing. Those actions become available after the initialization process completes.
򐂰 The quick initialization process can be disabled in circumstances where it is not
necessary. For example, if the volume is the target of a Copy Services function, the
Copy Services operation formats the volume. The quick initialization process can also
be disabled for performance testing so that the measurements of the raw system
capabilities can take place without waiting for the process to complete.

For more information, see IBM Knowledge Center and expand Product overview →
Technical overview → Volumes → Standard-provisioned volumes.

Creating mirrored volumes


To create a mirrored volume, complete the following steps:
1. In the Create Volumes window, click Mirrored, and choose the Pool for Copy1 and Copy2
by using the drop-down menus. Although the mirrored volume can be created in the same
pool, this setup is not typical. Generally, keep volumes copies on separate set of physical
disks (pools).
2. Enter the following Volume Details:
– Quantity
– Capacity
– Capacity savings
– Name



Leave the I/O group option at its default setting of Automatic (see Figure 6-16).

Figure 6-16 Mirrored Volume creation

3. Click Create (or Create and Map).


When the operation completes, the volume is seen in the Volumes pane in the state
“Online (formatting)”, as shown in Figure 6-17 on page 305.

Figure 6-17 Mirrored volume formatting

A mirrored volume is displayed in the GUI as configured in the pool in which it has its primary.
In this example, volume itso-mirrored00-Pool0-Pool1 is displayed as configured in Pool0
because it has its primary copy in Pool0.

Note: When creating a mirrored volume by using this menu, you are not required to specify
the Mirrored Sync rate (it defaults to 2 MBps). The synchronization rate can be customized
by using the Custom menu.
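
The synchronization rate can also be changed after creation from the CLI, for example (the
volume name is a placeholder). The -syncrate value is a number 1 - 100 that maps to a data
rate, as described in the product documentation; the default value of 50 corresponds to
2 MBps:

   chvdisk -syncrate 80 itso-mirrored00-Pool0-Pool1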

Capacity savings option


When the basic or mirrored method of volume creation is used, the GUI provides a Capacity
Savings option, which enables altering the volume provisioning parameters without using the
Custom volume provisioning method. You can select Thin-provisioned or Compressed from
the drop-down menu, as shown in Figure 6-18, to create thin-provisioned or compressed
volumes.

Figure 6-18 Volume Creation with Capacity Saving option

Note: Consider the compression guidelines in Chapter 9, “Advanced features for storage
efficiency” on page 463 before creating the first compressed volume copy on a system.

When a thin-provisioned or compressed volume is defined in a DRP, the Deduplicated option


becomes available. Select this option to enable deduplication of the volume.



Thin-provisioned and compressed volumes have a special icon in the Capacity column of the
Volumes menu that makes it easy to distinguish them, as shown in Figure 6-19.

Figure 6-19 Space-efficient volumes icon

Volume itso-thin00-Pool0 is thin-provisioned. Volume itso-thin01-Pool1 is compressed. There
is no icon that indicates whether a volume is deduplicated.

6.5.2 Creating custom volumes


The Create Volumes window also enables Custom volume creation that expands the set of
options for volume creation that are available to the administrator.

The Custom menu consists of several panes:


򐂰 Volume Location: Mandatory. It defines the number of volume copies, pools to be used,
and I/O group preferences.
򐂰 Volume Details: Mandatory. It defines the Capacity savings option.
򐂰 Thin Provisioning: Enables configuration of thin-provisioning settings if this capacity saving
option is selected.
򐂰 Compressed: Enables configuration of compression settings if this capacity saving option
is selected.
򐂰 General: Configures cache mode and formatting.
򐂰 Summary.

Use these panes to customize your Custom volume as needed, and then commit these
changes by clicking Create.

You can mix and match settings on different panes to achieve the final volume configuration
that meets your requirements.

Volume Location pane


The Volume Location pane is shown in Figure 6-20.

Figure 6-20 Volume Location pane

This pane has the following options:
򐂰 Volume copy type: You can choose between None (single volume copy) and Mirrored
(two volume copies).
򐂰 Pool: Specifies storage pool to use for each of volume copies.
򐂰 Mirror sync rate: You can set the mirror sync rate for the volume copies. This option is
displayed only for the Mirrored volume copy type, and you can set the volume copy
synchronization rate to a value in the range 128 KiBps - 64 MiBps.
򐂰 Caching I/O group: You can choose between Automatic (allocated by the system) and
manually specifying the I/O group.
򐂰 Preferred node: You can choose between Automatic (allocated by the system) and
manually specifying the preferred node for the volume.
򐂰 Accessible I/O groups: You can choose between Only the caching I/O group and All.

Volume Details pane


The Volume Details pane is shown in Figure 6-21.

Figure 6-21 Volume Details pane

This pane has the following options:


򐂰 Quantity: You can specify how many volumes are created.
򐂰 Capacity: Capacity of the volume.
򐂰 Name: You can define the volume name.
򐂰 Capacity savings: You can choose between None (standard-provisioned volume),
Thin-provisioned, and Compressed.
򐂰 Deduplicated: Thin-provisioned and compressed volumes that are created in a DRP can
be deduplicated.



If you click Define another volume, the GUI displays a subpane in which you can define the
configuration of another volume, as shown in Figure 6-22.

Figure 6-22 Volume Details pane with two volume subpanes

In this way, you can create volumes with different characteristics in a single invocation
of the volume creation wizard.

Thin Provisioning pane


If you choose to create a thin-provisioned volume, a Thin Provisioning pane is displayed, as
shown in Figure 6-23.

Figure 6-23 Custom volume creation: Thin Provisioning pane

This pane includes the following options:


򐂰 Real capacity: Real capacity of the volume, which is specified as a percentage of the
virtual capacity or in bytes.
򐂰 Automatically expand: Whether to automatically expand the real capacity of the volume if
needed. Defaults to Enabled.
򐂰 Warning threshold: Whether a warning message is sent and at what percentage of filled
virtual capacity. Defaults to Enabled, with a warning threshold set at 80%.
򐂰 Thin-Provisioned Grain Size: You can define the grain size for the thin-provisioned volume.
Defaults to 256 KiB.

Important: If you do not use the autoexpand feature, the volume goes offline if it
receives a write request after all real capacity is allocated.

The default grain size is 256 KiB. The optimum choice of grain size depends on the
volume use type. Consider the following points:
򐂰 If you are not going to use the thin-provisioned volume as a FlashCopy source or
target volume, use 256 KiB to maximize performance.
򐂰 If you are going to use the thin-provisioned volume as a FlashCopy source or target
volume, specify the same grain size for the volume and for the FlashCopy function.
򐂰 If you plan to use Easy Tier with thin-provisioned volumes, see the IBM Support
article Performance Problem When Using Easy Tier With Thin Provisioned
Volumes.

Compressed pane
If you choose to create a compressed volume, a Compressed pane is displayed, as shown in
Figure 6-24.

Figure 6-24 Compressed pane

This pane gives the following options:


򐂰 Real capacity: Real capacity of the volume, which is specified as percentage of the virtual
capacity or in bytes.
򐂰 Automatically expand: Whether to automatically expand the real capacity of the volume if
needed. Defaults to Enabled.
򐂰 Warning threshold: Whether a warning message is sent, and at what percentage of filled
virtual capacity. Defaults to Enabled, with a warning threshold set at 80%.

You cannot specify the grain size for a compressed volume.

Note: Consider the compression guidelines in Chapter 9, “Advanced features for storage
efficiency” on page 463 before creating the first compressed volume copy on a system.



General pane
The General pane is shown in Figure 6-25.

Figure 6-25 Custom volume creation: General pane

This pane gives the following options:


򐂰 Format volume: Controls whether the volume is formatted before being made available.
Defaults to Enabled.
򐂰 Cache mode: Controls volume caching. Defaults to Enabled. Other available options are
Read-only and Disabled.
򐂰 OpenVMS unit device identifier (UDID): Each OpenVMS Fibre Channel (FC)-attached
volume requires a user-defined identifier or UDID. A UDID is a nonnegative integer that is
used in the creation of the OpenVMS device name.

6.5.3 HyperSwap volumes


To create a HyperSwap volume, complete the following steps:
1. In the IBM Spectrum Virtualize GUI, select Volumes → Volumes, as shown in
Figure 6-26.

Figure 6-26 Volumes menu

A list of volumes, their state, capacity, and associated storage pools, is displayed.
2. Click Create Volumes, as shown in Figure 6-27.

Figure 6-27 Create Volumes button

The Create Volumes tab opens the Create Volumes window, which displays available creation
methods.

Note: The volume classes that are displayed in the Create Volumes window depend on the
topology of the system.

The Create Volumes window for the standard topology is shown in Figure 6-28.

Figure 6-28 Basic, Mirrored, and Custom Volume Creation options



Note: Consider the following points:
򐂰 A basic volume is a volume that has only one physical copy, uses storage that is
allocated from a single pool on one site, and uses read/write cache mode.
򐂰 A mirrored volume is a volume with two physical copies, where each volume copy can
belong to a different storage pool.
򐂰 A custom volume (in the context of this menu) is a basic or mirrored volume with values
of some of its parameters that are changed from the defaults.
򐂰 The Create Volumes window also provides (by using the Capacity Savings parameter)
the ability to change the default provisioning of a basic or mirrored volume to
thin-provisioned or compressed. For more information, see “Capacity savings option”
on page 305.

The notable difference between HyperSwap volume creation and basic volume creation is
that in the HyperSwap volume details, the system prompts for a storage pool at each site. The
system uses its topology awareness to map storage pools to sites, which ensures that the
data is correctly mirrored across locations.

As shown in Figure 6-29, a single volume is created with volume copies in sites site1 and
site2. This volume is in an active-active (Metro Mirror) relationship with extra resilience that is
provided by two change volumes.

Figure 6-29 HyperSwap Volume creation window

After the volume is created, it is visible in the volumes list, as shown in Figure 6-30.

Figure 6-30 A HyperSwap volume that is visible in the Volumes list

The Pool column shows the value “Multiple”, which indicates that a volume is a HyperSwap
volume. A volume copy at each site is visible, and the change volumes that are used by the
technology are not displayed in this GUI view.

Note: For volumes in multi-site topologies, the asterisk (*) does not indicate the primary
copy, but the local volume copy that is used for data reads.

A single mkvolume command can create a HyperSwap volume. Up to IBM Spectrum Virtualize
V7.5, this process required careful planning and running the following sequence of
commands:
1. mkvdisk master_vdisk
2. mkvdisk aux_vdisk
3. mkvdisk master_change_volume
4. mkvdisk aux_change_volume
5. mkrcrelationship –activeactive
6. chrcrelationship -masterchange
7. chrcrelationship -auxchange
8. addvdiskaccess
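By contrast, the whole sequence is now replaced by one command. The following is a hedged sketch that assumes a HyperSwap topology and hypothetical pool names site1pool and site2pool:
mkvolume -name HS_VOL1 -pool site1pool:site2pool -size 100 -unit gb
Specifying one storage pool per site lets the system create both volume copies, both change volumes, and the active-active relationship in a single step.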

6.5.4 I/O throttling


This section describes how to use I/O throttling on a volume.



Defining a volume throttle
To set a volume throttle, complete the following steps:
1. Select Volumes → Volumes, and then select the volume that you want to throttle. Select
Actions → Edit Throttle, as shown in Figure 6-31.

Figure 6-31 Edit Throttle menu

2. In the Edit Throttle window, define the throttle in terms of number of IOPS or bandwidth. In
our example, we set an IOPS throttle of 10,000, as shown in Figure 6-32. Click Create.

Figure 6-32 IOPS throttle on a volume

3. After the Edit Throttle task completes successfully, the Edit Throttle window opens again.
You can now set the throttle based on the different metrics, modify the throttle, or close the
window without performing further actions by clicking Close.



Listing volume throttles
To view volume throttles, select Volumes → Volumes, and then select Actions → View All
Throttles, as shown in Figure 6-33.

Figure 6-33 View All Throttles menu

The View All Throttles menu shows all volume throttles that are defined in the system, as
shown in Figure 6-34 on page 317.

Figure 6-34 View All Throttles window

You can view other throttles by selecting a different throttle type in the drop-down menu, as
shown in Figure 6-35.

Figure 6-35 Filtering the throttle type



Modifying or removing a volume throttle
To remove a volume throttle, complete the following steps:
1. From the Volumes menu, select the volume to which the throttle that you want to remove
is attached. Select Actions → Edit Throttle, as shown in Figure 6-36.

Figure 6-36 Edit Throttle menu

2. In the Edit Throttle window, click Remove for the throttle that you want to remove. As
shown in Figure 6-37, we remove the IOPS throttle from the volume.

Figure 6-37 Removing a throttle

After the Edit Throttle task completes successfully, the Edit Throttle window opens again. You
can now set the throttle based on the different metrics, modify the throttle, or close the
window without performing any action by clicking Close.
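Volume throttles can also be managed from the CLI. The following is a minimal sketch (the volume name vdisk0, the throttle object name throttle0, and the limit are examples only):
mkthrottle -type vdisk -iops 10000 -vdisk vdisk0
lsthrottle
rmthrottle throttle0
The mkthrottle command alternatively accepts a -bandwidth limit in MBps, and lsthrottle shows the defined throttle objects so that the correct identifier can be passed to rmthrottle.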

6.5.5 Volume protection


To configure volume protection, select Settings → System → Volume Protection, as shown
in Figure 6-38.

Figure 6-38 Volume Protection configuration

In this view, you can configure system-wide volume protection (enabled by default), set the
minimum inactivity period that is required to allow volume deletion (protection duration), and
configure volume protection for each configured pool (enabled by default). In the example,
volume protection is enabled with the 15-minute minimum inactivity period and is turned on
for all configured pools.

6.5.6 Modifying a volume


After a volume is created, it is possible to modify many of its characteristics. The following
sections show how to perform those configuration changes.



Shrinking
To shrink a volume, complete the following steps:
1. Ensure that you have a current and verified backup of any in-use data that is stored on the
volume that you intend to shrink.
2. From the Volumes menu, select the volume that you want to shrink. Select Actions →
Shrink…, as shown in Figure 6-39.

Figure 6-39 Volume Shrink menu

3. Specify either Shrink by or Final size (the other choice is calculated automatically), as
shown in Figure 6-40 on page 321.

Figure 6-40 Specifying the size of the shrunk volume

Note: The storage system reduces the volume capacity by removing one or more
arbitrarily selected extents. Do not shrink a volume that contains data that is being used
unless you have a current and verified backup of the data.

4. Click OK to confirm the action, as shown in Figure 6-41.

Figure 6-41 Confirming the volume shrinking



5. After the operation completes, you can see the volume with the new size by selecting
Volumes → Volumes, as shown in Figure 6-42.

Figure 6-42 Shrunk volume size
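The same operation is available from the CLI by running the shrinkvdisksize command. The following is a minimal sketch with a hypothetical volume name:
shrinkvdisksize -size 5 -unit gb volume_A
This command shrinks volume_A by 5 GB. The same caution applies: ensure that a current and verified backup exists before shrinking a volume that is in use.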

Expanding
To expand a volume, complete the following steps:
1. From the Volumes menu, select the volume that you want to expand. Select Actions →
Expand…, as shown in Figure 6-43.

Figure 6-43 Volume Expand menu

2. Specify either Expand by: or Final size: (the other value is calculated automatically) and
click Expand, as shown in Figure 6-44 on page 323.

Figure 6-44 Specifying the expanded volume size

3. After the operation completes (including the formatting of the extra space), you can see
the volume with the new size by selecting Volumes → Volumes, as shown in Figure 6-45.

Figure 6-45 Expanded volume size

Note: Expanding a volume is not sufficient to increase the available space that is visible
to the host. The host must become aware of the changed volume size at the OS level,
for example, through a bus rescan. More operations at the logical volume manager
(LVM) or file system levels might be needed before more space is visible to applications
running on the host.
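As an illustration, on a Linux host the new capacity might be made visible with a sequence like the following sketch (the device name sdc is hypothetical, and the exact steps depend on the OS, multipathing, and file system in use):
echo 1 > /sys/block/sdc/device/rescan
pvresize /dev/sdc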



Modifying capacity savings
This action is available only for space-efficient volumes. To modify capacity savings options
for a volume, complete the following steps:
1. From the Volumes menu, select the volume that you want to modify. Select Actions →
Modify Capacity Savings…, as shown in Figure 6-46.

Figure 6-46 Modify Capacity Savings menu

2. Select the capacity savings option that you want, as shown in Figure 6-47 on page 325.

Figure 6-47 Capacity savings option for a volume

3. For volumes that are configured in a DRP, it is possible to enable deduplication, as shown
in Figure 6-48.

Figure 6-48 Enabling deduplication on a volume

After you configure the capacity savings options of a volume, click Modify to apply them.
When the operation completes, you are returned to the Volumes view.



Modifying the mirror sync rate
This action is available only for mirrored volumes. To modify the mirror sync rate of a volume,
complete the following steps:
1. From the Volumes menu, select the volume that you want to modify. Select Actions →
Modify Mirror Sync Rate…, as shown in Figure 6-49.

Figure 6-49 Modify Mirror Sync Rate menu

2. Select the mirror sync rate from the list. Available values are 0 KBps - 64 MBps. Click
Modify to set the mirror sync rate for the volume, as shown in Figure 6-50.

Figure 6-50 Setting the mirror sync rate

When the operation completes, you are returned to the Volumes view.
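The mirror sync rate can also be changed from the CLI by using the -syncrate parameter of the chvdisk command, which takes a relative value of 0 - 100 that the system maps to the data rates that are shown in the GUI. The following is a minimal sketch with a hypothetical volume name:
chvdisk -syncrate 80 MIRRORED_VOL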

Changing the volume cache mode


To change the volume cache mode, complete the following steps:
1. From the Volumes menu, select the volume that you want to modify. Select Actions →
Cache Mode…, as shown in Figure 6-51.

Figure 6-51 Modifying the volume cache mode

2. Select the cache mode that you want for the volume from the drop-down list and click OK,
as shown in Figure 6-52.

Figure 6-52 Setting the volume cache mode



When the operation completes, you are returned to the Volumes view.
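The equivalent CLI operation uses the -cache parameter of the chvdisk command, where the value none corresponds to the Disabled option in the GUI. The following is a minimal sketch with a hypothetical volume name:
chvdisk -cache readonly vdisk0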

Modify OpenVMS UDID menu


To change the OpenVMS UDID, use the Modify OpenVMS UDID menu.

A UDID is a nonnegative integer that is used in the creation of the OpenVMS device name.
All fibre-attached volumes have an allocation class of $1$, followed by the letters DGA, and
then followed by the UDID. All storage unit LUNs that you assign to an OpenVMS system
need a UDID so that the OS can detect and name the device. LUN 0 must also have a UDID,
but the system displays LUN 0 as $1$GGA<UDID>, not as $1$DGA<UDID>. For more
information about fibre-attached storage devices, see Guidelines for OpenVMS Cluster
Configurations.

To change a volume OpenVMS UDID, complete the following steps:


1. From the Volumes menu, select the volume that you want to modify. Select Actions →
Modify Open VMS UDID…, as shown in Figure 6-53.

Figure 6-53 Modify Open VMS UDID menu

2. Specify the UDID for the volume and click Modify, as shown in Figure 6-54 on page 329.

Figure 6-54 Setting the volume UDID

When the operation completes, you are returned to the Volumes view.
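From the CLI, the UDID can be set with the -udid parameter of the chvdisk command. The following is a minimal sketch with hypothetical values:
chvdisk -udid 1234 OPENVMS_VOL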

6.5.7 Deleting a volume


When you try to delete a volume, the system verifies whether it is a part of a host mapping,
FlashCopy mapping, or remote-copy relationship. If any of these mappings exists, the delete
attempt fails unless the -force parameter is specified on the corresponding remove
commands. If volume protection is enabled, a delete fails if the system finds a recent I/O
activity to the volume, even if the -force parameter is specified. The -force parameter
overrides the volume dependencies, not the volume protection setting.

To delete a volume, complete the following steps:


1. From the Volumes menu, select the volume that you want to modify. Select Actions →
Delete, as shown in Figure 6-55.

Figure 6-55 Volume Delete menu item



2. Review the list of volumes that is selected for deletion and provide the number of volumes
that you intend to delete, as shown in Figure 6-56. Click Delete to remove the volume from
the system configuration.

Figure 6-56 Confirming the volume deletion

When the operation completes, you are returned to the Volumes view.

6.5.8 Mapping a volume to a host


To make a volume available to a host or cluster of hosts, it must be mapped. A volume can be
mapped to the host at creation time or later.

To map a volume to a host or cluster, complete the following steps:


1. From the Volumes menu, select the volume that you want to modify. Select Actions →
Map to Host or Host Cluster…, as shown in Figure 6-57 on page 331.

Figure 6-57 Volume mapping menu item

Tip: An alternative way of opening the Actions menu is to highlight (select) a volume
and right-click.

2. The Create Mapping window opens. In this window, select whether to create a mapping to
a host or host cluster. The list of objects of the appropriate type is displayed. Select to
which hosts or host clusters the volume should be mapped.
You can either allow the storage system to assign the SCSI LUN ID to the volume by
selecting the System Assign option, or select Self Assign and provide the LUN ID
yourself. Click Next to proceed to the next step.



In Figure 6-58, a single volume is mapped to a host and the storage system assigns the
SCSI LUN IDs.

Figure 6-58 Mapping a volume to a host

3. A summary window opens and shows all the volume mappings for the selected host. The
new mapping is highlighted, as shown in Figure 6-59 on page 333. Review the future
configuration state and click Map Volumes to map the volume.

Figure 6-59 Mapping a volume to host cluster summary

4. After the task completes, the wizard returns to the Volumes window. You can list the
volumes that are mapped to the host by selecting Hosts → Mappings, as shown in
Figure 6-60.

Figure 6-60 Accessing the Hosts Mapping menu



A window with a list of volumes that are mapped to all hosts opens, as shown in
Figure 6-61.

Figure 6-61 List of volume to host mappings

To see volumes that are mapped to clusters instead of hosts, change the value that is
shown in the upper left corner (see Figure 6-61) from Private Mappings to Shared
Mappings.

Note: You can use the filter to display only the hosts or volumes that you want to see.

The host can now access the mapped volume. For more information about discovering the
volumes on the host, see Chapter 7, “Hosts” on page 369.

6.5.9 Migrating a volume to another storage pool


IBM Spectrum Virtualize enables online volume migration with no applications downtime.
Volumes can be moved between storage pools without affecting business workloads that are
running on these volumes.

There are two ways to perform volume migration: by using the volume migration feature and
by creating a volume copy.

Volume migration by using the migration feature


The migration process is a low priority process that does not affect the performance of the
IBM Spectrum Virtualize system. However, as subsequent volume extents are moved to the
new storage pool, the performance of the volume is determined more by the characteristics of
the new storage pool.

Note: You cannot move a volume copy that is compressed to an I/O group that contains at
least one node that does not support compressed volumes.

To migrate a volume to another storage pool, complete the following steps:
1. In the Volumes menu, highlight the volume that you want to migrate. Select Actions →
Migrate to Another Pool…, as shown in Figure 6-62.

Figure 6-62 Migrate Volume Copy window: Select menu option



2. The Migrate Volume Copy window opens. If your volume consists of more than one copy,
select the copy that you want to migrate to another storage pool, as shown in Figure 6-63.
If the selected volume consists of one copy, the volume copy selection pane is not
displayed.

Figure 6-63 Migrate Volume Copy: Selecting the volume copy

3. Select the new target storage pool and click Migrate, as shown in Figure 6-63. The Select
a Target Pool pane displays the list of all pools that are a valid migration copy target for the
selected volume copy.
4. You are returned to the Volumes view. The time that it takes for the migration process to
complete depends on the size of the volume. The status of the migration can be monitored
by selecting Monitoring → Background Tasks, as shown in Figure 6-64.

Figure 6-64 Migration progress

After the migration task completes, the completed migration task is visible in the Recently
Completed Task pane of the Background Tasks menu, as shown in Figure 6-65.

Figure 6-65 Migration complete

In the Volumes → Volumes menu, the volume copy is now displayed in the target storage
pool, as shown in Figure 6-66.

Figure 6-66 Volume copy after migration

The volume copy is now migrated without any host or application downtime to the new
storage pool.

Another way to migrate single-copy volumes to another pool is to use the volume copy
feature, as described in “Volume migration by adding a volume copy” on page 338.

Note: Migrating a volume between storage pools with different extent sizes is not
supported. If you must migrate a volume to a storage pool with a different extent size, use
the volume migration by adding a volume copy method.



Volume migration by adding a volume copy
IBM Spectrum Virtualize supports creating, synchronizing, splitting, and deleting volume
copies. A combination of these tasks can be used to migrate volumes to other storage pools.

The easiest way to migrate volume copies is to use the migration feature that is described in
6.5.9, “Migrating a volume to another storage pool” on page 334. However, in some use
cases, the preferred or only method of volume migration is to create a copy of the volume in
the target storage pool and then remove the old copy.

Note: You can specify storage efficiency characteristics of the new volume copy differently
than the ones of the primary copy. For example, you can make a thin-provisioned copy of a
standard-provisioned volume.

This volume migration option can be used only for single-copy volumes. If you need to
move a copy of a mirrored volume by using this method, you must delete one of the volume
copies first and then create a copy in the target storage pool. This process causes a
temporary loss of redundancy while the volume copies synchronize.

To migrate a volume by using the volume copy feature, complete the following steps:
1. Select the volume that you want to move, and select Actions → Add Volume Copy, as
shown in Figure 6-67.

Figure 6-67 Adding the volume copy

2. Create a second copy of your volume in the target storage pool, as shown in Figure 6-68.
In our example, a compressed copy of the volume is created in target pool Pool2. Click
Add. The Deduplication option is not available if either of the volume copies is not in a
DRP.

Figure 6-68 Defining the new volume copy

Wait until the copies are synchronized, as shown in Figure 6-69.

Figure 6-69 Verifying that the volume copies are synchronized



3. Change the roles of the volume copies by making the new copy the primary copy, as
shown in Figure 6-70. The current primary copy is displayed with an asterisk next to its
name.

Figure 6-70 Setting the volume copy in the target storage pool as the primary copy

4. Split or delete the volume copy in the source pool, as shown in Figure 6-71.

Figure 6-71 Deleting the volume copy in the source pool

5. Confirm the removal of the volume copy, as shown in Figure 6-72.

Figure 6-72 Confirming the deletion of a volume copy

6. The Volumes view now shows that the volume has a single copy in the target pool, as
shown in Figure 6-73.

Figure 6-73 Volume copy in the target storage pool

Migrating volumes by using the volume copy feature requires more user interaction, but might
be a preferred option for particular use cases. One such example is migrating a volume from
a tier 1 storage pool to a lower performance tier 2 storage pool.

First, the volume copy feature can be used to create a copy in the tier 2 pool (steps 1 and 2).
All reads are still performed in the tier 1 pool from the primary copy. After the volume copies
are synchronized (end of step 2), all writes are destaged to both pools, but reads are still done
only from the primary copy.

To test the performance of the volume in the new pool, switch the roles of the volume copies
by making the new copy the primary (step 3). If the performance is acceptable, the volume copy
in tier 1 can be split or deleted (steps 4 and 5). If the testing of the tier 2 pool shows
unsatisfactory performance, the old copy can be made the primary again to switch volume reads
back to the tier 1 copy.

6.6 Volume operations by using the CLI


This section describes how to perform various volume configuration and administrative tasks
by using the CLI.

For more information about how to set up CLI access, see Appendix B, “Command-line
interface setup” on page 871.

6.6.1 Displaying volume information


To display information about all volumes that are defined within the IBM Spectrum Virtualize
environment, run the lsvdisk command. To display more information about a specific volume,
run the command again and provide the volume name or the volume ID as the command
parameter, as shown in Example 6-1.

Example 6-1 The lsvdisk command


IBM_Storwize:ITSO:superuser>lsvdisk -delim ' '
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count
RC_change compressed_copy_count parent_mdisk_grp_id parent_mdisk_grp_name formatting
encrypt volume_id volume_name function
0 A_MIRRORED_VOL_1 0 io_grp0 online many many 10.00GB many
6005076400F580049800000000000002 0 2 empty 0 no 0 many many no yes 0 A_MIRRORED_VOL_1
1 COMPRESSED_VOL_1 0 io_grp0 online 1 Pool1 15.00GB striped
6005076400F580049800000000000003 0 1 empty 0 no 1 1 Pool1 no yes 1 COMPRESSED_VOL_1
2 vdisk0 0 io_grp0 online 0 Pool0 10.00GB striped 6005076400F580049800000000000004 0 1
empty 0 no 0 0 Pool0 no yes 2 vdisk0



3 THIN_PROVISION_VOL_1 0 io_grp0 online 0 Pool0 100.00GB striped
6005076400F580049800000000000005 0 1 empty 1 no 0 0 Pool0 no yes 3 THIN_PROVISION_VOL_1
4 COMPRESSED_VOL_2 0 io_grp0 online 1 Pool1 30.00GB striped
6005076400F580049800000000000006 0 1 empty 0 no 1 1 Pool1 no yes 4 COMPRESSED_VOL_2
5 COMPRESS_VOL_3 0 io_grp0 online 1 Pool1 30.00GB striped
6005076400F580049800000000000007 0 1 empty 0 no 1 1 Pool1 no yes 5 COMPRESS_VOL_3
6 MIRRORED_SYNC_RATE_16 0 io_grp0 online many many 10.00GB many
6005076400F580049800000000000008 0 2 empty 0 no 0 many many no yes 6 MIRRORED_SYNC_RATE_16
7 THIN_PROVISION_MIRRORED_VOL 0 io_grp0 online many many 10.00GB many
6005076400F580049800000000000009 0 2 empty 2 no 0 many many no yes 7
THIN_PROVISION_MIRRORED_VOL
8 Tiger 0 io_grp0 online 0 Pool0 10.00GB striped 6005076400F580049800000000000010 0 1
not_empty 0 no 0 0 Pool0 yes yes 8 Tiger
12 vdisk0_restore 0 io_grp0 online 0 Pool0 10.00GB striped
6005076400F58004980000000000000E 0 1 empty 0 no 0 0 Pool0 no yes 12 vdisk0_restore
13 vdisk0_restore1 0 io_grp0 online 0 Pool0 10.00GB striped
6005076400F58004980000000000000F 0 1 empty 0 no 0 0 Pool0 no yes 13 vdisk0_restore1

6.6.2 Creating a volume


Running the mkvdisk command creates sequential, striped, or image mode volumes. When
they are mapped to a host object, these objects are seen as disk drives on which the host can
perform I/O operations.

Creating an image mode disk: If you do not specify the -size parameter when you create
an image mode disk, the entire MDisk capacity is used.

You must know the following information before you start to create the volume:
򐂰 In which storage pool the volume will have its extents.
򐂰 From which I/O group the volume will be accessed.
򐂰 Which IBM Spectrum Virtualize node will be the preferred node for the volume.
򐂰 Size of the volume.
򐂰 Name of the volume.
򐂰 Type of the volume.
򐂰 Whether this volume is to be managed by IBM Easy Tier to optimize its performance.

When you are ready to create your striped volume, run the mkvdisk command. The command
that is shown in Example 6-2 creates a 10 GB striped volume with volume ID 8 within the
storage pool Pool0 and assigns it to the I/O group io_grp0. Because no preferred node is
specified, the system assigns one automatically.

Example 6-2 The mkvdisk command


IBM_Storwize:ITSO:superuser>mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 10 -unit gb -name
Tiger
Virtual Disk, id [8], successfully created

To verify the results, run the lsvdisk command and provide the volume ID as the command
parameter, as shown in Example 6-3.

Example 6-3 The lsvdisk command


IBM_Storwize:ITSO:superuser>lsvdisk 8
id 8
name Tiger
IO_group_id 0
IO_group_name io_grp0

status online
mdisk_grp_id 0
mdisk_grp_name Pool0
capacity 10.00GB
type striped
formatted no
formatting yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076400F580049800000000000010
preferred_node_id 2
fast_write_state not_empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
File system
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
owner_type none
owner_id
owner_name
encrypt yes
volume_id 8
volume_name Tiger
function
throttle_id
throttle_name
IOPs_limit
bandwidth_limit_MB
volume_group_id
volume_group_name
cloud_backup_enabled no
cloud_account_id
cloud_account_name
backup_status off
last_backup_time
restore_status none
backup_grain_size
deduplicated_copy_count 0

copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0
type striped



mdisk_id
mdisk_name
fast_write_state not_empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
encrypt yes
deduplicated_copy no
used_capacity_before_reduction
0.00MB

The required tasks to create a volume are complete.

6.6.3 Creating a thin-provisioned volume


Example 6-4 shows an example of creating a thin-provisioned volume, which requires the
following parameters to be specified:
-rsize This parameter makes the volume a thin-provisioned volume. If this
parameter is missing, the volume is created as standard-provisioned.
-autoexpand This parameter specifies that thin-provisioned volume copies
automatically expand their real capacities by allocating new extents
from their storage pool (optional).
-grainsize This parameter sets the grain size in kilobytes (KB) for a
thin-provisioned volume (optional).

Example 6-4 Running the mkvdisk command


IBM_Storwize:ITSO:superuser>mkvdisk -mdiskgrp Pool0 -iogrp 0 -vtype striped -size 10 -unit
gb -rsize 50% -autoexpand -grainsize 256
Virtual Disk, id [9], successfully created

This command creates a thin-provisioned volume with 10 GB of virtual capacity and an initial
real capacity of 50% of that size. The volume belongs to the storage pool that is named Pool0
and is owned by input/output (I/O) Group io_grp0. The real capacity automatically expands
until the real volume size of 10 GB is reached. The grain size is set to 256 KB, which is the
default.

Disk size: When the -rsize parameter is used to specify the real physical capacity of
a thin-provisioned volume, the following options are available to specify the physical
capacity: disk_size, disk_size_percentage, and auto.

Use the disk_size_percentage option to define initial real capacity by using a percentage
of the disk’s virtual capacity that is defined by the -size parameter. This option takes as
a parameter an integer, or an integer that is immediately followed by the percent (%)
symbol.

Use the disk_size option to directly specify the real physical capacity by specifying its size
in the units that are defined by using the -unit parameter (the default unit is MB). The
-rsize value can be greater than, equal to, or less than the size of the volume.

The auto option creates a volume copy that uses the entire size of the MDisk. If you specify
the -rsize auto option, you must also specify the -vtype image option.

An entry of 1 GB uses 1,024 MB.
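To illustrate the first two options, the following sketches create the same 10 GB thin-provisioned volume with an initial real capacity of 2 GB, first as a percentage and then as an absolute size (the -unit parameter applies to both -size and -rsize):
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 10 -unit gb -rsize 20% -autoexpand
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 10 -unit gb -rsize 2 -autoexpand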

6.6.4 Creating a volume in image mode


Use an image mode volume to bring a non-virtualized disk (for example, from a
pre-virtualization environment) under the control of the IBM Spectrum Virtualize system. After
it is managed by the system, you can migrate the volume to standard managed-mode storage.

When an image mode volume is created, it maps directly to the previously unmanaged MDisk
from which it is created. Therefore, except for a thin-provisioned image mode volume, the
volume’s LBA x equals MDisk LBA x.

Size: An image mode volume must be at least 512 bytes (the capacity cannot be 0) and
always occupies at least one extent.

You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The
-fmtdisk parameter cannot be used to create an image mode volume.

Capacity: If you create a mirrored volume from two image mode MDisks without specifying
a -capacity value, the capacity of the resulting volume is the smaller of the two MDisks.
The remaining space on the larger MDisk is inaccessible.

If you do not specify the -size parameter when you create an image mode disk, the entire
MDisk capacity is used.

Running the mkvdisk command to create an image mode volume is shown in Example 6-5.

Example 6-5 The mkvdisk (image mode) command


IBM_2145:ITSO_CLUSTER:superuser>mkvdisk -mdiskgrp ITSO_Pool1 -iogrp 0 -mdisk mdisk25 -vtype
image -name Image_Volume_A
Virtual Disk, id [6], successfully created

As shown in this example, an image mode volume that is named Image_Volume_A is created
that uses the mdisk25 MDisk. The MDisk is moved to the storage pool ITSO_Pool1, and the
volume is owned by the I/O group io_grp0.



If you run the lsvdisk command, it shows a volume that is named Image_Volume_A with the
type image, as shown in Example 6-6.

Example 6-6 The lsvdisk command


IBM_2145:ITSO_CLUSTER:superuser>lsvdisk -filtervalue type=image
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity
type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
fast_write_state se_copy_count RC_change compressed_copy_count parent_mdisk_grp_id
parent_mdisk_grp_name formatting encrypt volume_id volume_name function
6 Image_Volume_A 0 io_grp0 online 5 ITSO_Pool1 1.00GB
image 6005076801FE80840800000000000021 0 1
empty 0 no 0 5
ITSO_Pool1 no no 6 Image_Volume_A

6.6.5 Adding a volume copy


You can add a copy to a volume. If volume copies are defined on different MDisks, the volume
remains accessible, even when the MDisk on which one of its copies depends becomes
unavailable. You can also create a copy of a volume on a dedicated MDisk by creating an
image mode copy of the volume. Although volume copies can increase the availability of data,
they are not separate objects.

Volume mirroring can also be used as an alternative method of migrating volumes between
storage pools.

To create a copy of a volume, run the addvdiskcopy command. This command creates a copy
of the chosen volume in the specified storage pool, which changes a non-mirrored volume
into a mirrored one.

The following scenario shows how to create a copy of a volume in a different storage pool. As
shown in Example 6-7, the volume initially has a single copy with copy_id 0 that is provisioned
in pool Pool0.

Example 6-7 The lsvdisk command


IBM_Storwize:ITSO:superuser>lsvdisk 2
id 2
name vdisk0
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name Pool0
capacity 10.00GB
type striped
formatted yes
formatting no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076400F580049800000000000004
preferred_node_id 2
fast_write_state empty
cache readonly
udid

fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
File system
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
owner_type none
owner_id
owner_name
encrypt yes
volume_id 2
volume_name vdisk0
function
throttle_id
throttle_name
IOPs_limit
bandwidth_limit_MB
volume_group_id
volume_group_name
cloud_backup_enabled no
cloud_account_id
cloud_account_name
backup_status off
last_backup_time
restore_status none
backup_grain_size
deduplicated_copy_count 0

copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise



tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB

Example 6-8 shows adding the second volume copy by running the addvdiskcopy command.

Example 6-8 The addvdiskcopy command


IBM_Storwize:ITSO:superuser>addvdiskcopy -mdiskgrp Pool1 -vtype striped -unit gb vdisk0
Vdisk [2] copy [1] successfully created

During the synchronization process, you can see the status by running the
lsvdisksyncprogress command.

As shown in Example 6-9, the first time that the status is checked, the synchronization
progress is at 0%, and the estimated completion time is 171018232305. The estimated
completion time is displayed in the YYMMDDHHMMSS format. In our example, it is 2017,
Oct-18 23:23:05. When the command is run again, the progress status is at 100%, and the
synchronization is complete.

Example 6-9 Synchronization


IBM_Storwize:ITSO:superuser>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
2 vdisk0 1 0 171018232305
IBM_Storwize:ITSO:superuser>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
2 vdisk0 1 100

As shown in Example 6-10, the new volume copy (copy_id 1) was added and appears in the
output of the lsvdisk command.

Example 6-10 The lsvdisk command


IBM_Storwize:ITSO:superuser>lsvdisk vdisk0
id 2
name vdisk0
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 10.00GB
type many
formatted yes
formatting no
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name

vdisk_UID 6005076400F580049800000000000004
preferred_node_id 2
fast_write_state empty
cache readonly
udid
fc_map_count 0
sync_rate 50
copy_count 2
se_copy_count 0
File system
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id many
parent_mdisk_grp_name many
owner_type none
owner_id
owner_name
encrypt yes
volume_id 2
volume_name vdisk0
function
throttle_id
throttle_name
IOPs_limit
bandwidth_limit_MB
volume_group_id
volume_group_name
cloud_backup_enabled no
cloud_account_id
cloud_account_name
backup_status off
last_backup_time
restore_status none
backup_grain_size
deduplicated_copy_count 0

copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced



tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB

copy_id 1
status online
sync yes
auto_delete no
primary no
mdisk_grp_id 1
mdisk_grp_name Pool1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 1
parent_mdisk_grp_name Pool1
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB

When adding a volume copy, you can define it with different parameters than the original
volume copy. For example, you can create a thin-provisioned copy of a standard-provisioned
volume to migrate a thick-provisioned volume to a thin-provisioned volume. The migration can
be also done in the opposite direction.

Volume copy mirror parameters: To change the parameters of a volume copy, you must
delete the volume copy and redefine it with the new values.

In Example 6-11, the volume name is changed from VOL_NO_MIRROR to VOL_WITH_MIRROR.

Example 6-11 Volume name changes


IBM_Storwize:ITSO:superuser>chvdisk -name VOL_WITH_MIRROR VOL_NO_MIRROR
IBM_Storwize:ITSO:superuser>

Using the -autodelete flag to migrate a volume


This section shows how to run the addvdiskcopy command with the -autodelete flag set.
The -autodelete flag causes the primary copy to be deleted after the secondary copy is
synchronized.

Example 6-12 shows a shortened lsvdisk output for an uncompressed volume with a single
volume copy.

Example 6-12 An uncompressed volume


IBM_Storwize:ITSO:superuser>lsvdisk UNCOMPRESSED_VOL
id 9
name UNCOMPRESSED_VOL
IO_group_id 0
IO_group_name io_grp0
status online
...

copy_id 0
status online
sync yes
auto_delete no
primary yes
...
compressed_copy no
...

Example 6-13 adds a compressed copy with the -autodelete flag set.

Example 6-13 Compressed copy


IBM_Storwize:ITSO:superuser>addvdiskcopy -autodelete -rsize 2 -mdiskgrp 0 -compressed
UNCOMPRESSED_VOL
Vdisk [9] copy [1] successfully created

Example 6-14 shows the lsvdisk output with another compressed volume (copy 1) and
volume copy 0 being set to auto_delete yes.

Example 6-14 The lsvdisk command output


IBM_Storwize:ITSO:superuser>lsvdisk UNCOMPRESSED_VOL
id 9
name UNCOMPRESSED_VOL
IO_group_id 0
IO_group_name io_grp0
status online
...
compressed_copy_count 2



...

copy_id 0
status online
sync yes
auto_delete yes
primary yes
...

copy_id 1
status online
sync no
auto_delete no
primary no
...

When copy 1 is synchronized, copy 0 is deleted. You can monitor the progress of volume copy
synchronization by running the lsvdisksyncprogress command.

6.6.6 Splitting a mirrored volume


Running the splitvdiskcopy command creates an independent volume in the specified I/O
group from a volume copy of the specified mirrored volume. In effect, the command changes
a volume with two copies into two independent volumes, each with a single copy.

If the copy that you are splitting is not synchronized, you must use the -force parameter. If
you are attempting to remove the only synchronized copy of the source volume, the command
fails. However, you can run the command when either copy of the source volume is offline.

Example 6-15 shows the splitvdiskcopy command, which is used to split a mirrored volume.
It creates a volume that is named SPLIT_VOL from a copy with ID 1 of the volume that is
named VOLUME_WITH_MIRRORED_COPY.

Example 6-15 Splitting a volume


IBM_Storwize:ITSO:superuser>splitvdiskcopy -copy 1 -iogrp 0 -name SPLIT_VOL
VOLUME_WITH_MIRRORED_COPY
Virtual Disk, id [1], successfully created

As you can see in Example 6-16, the new volume is created as an independent volume.

Example 6-16 The lsvdisk command


IBM_Storwize:ITSO:superuser>lsvdisk SPLIT_VOL
id 1
name SPLIT_VOL
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name Pool1
capacity 10.00GB
type striped
formatted yes
formatting no
mdisk_id
mdisk_name
FC_id

FC_name
RC_id
RC_name
vdisk_UID 6005076400F580049800000000000012
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
File system
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 1
parent_mdisk_grp_name Pool1
owner_type none
owner_id
owner_name
encrypt yes
volume_id 1
volume_name SPLIT_VOL
function
throttle_id
throttle_name
IOPs_limit
bandwidth_limit_MB
volume_group_id
volume_group_name
cloud_backup_enabled no
cloud_account_id
cloud_account_name
backup_status off
last_backup_time
restore_status none
backup_grain_size
deduplicated_copy_count 0

copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 1
mdisk_grp_name Pool1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize



se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 1
parent_mdisk_grp_name Pool1
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB

6.6.7 Modifying a volume


Running the chvdisk command modifies a single property of a volume at a time. Therefore,
changing the volume name and modifying its I/O group requires two invocations of the
command.

Tips: Changing the I/O group with which this volume is associated requires a flush of the
cache within the nodes in the current I/O group to ensure that all data is written to disk. I/O
must be suspended at the host level before you perform this operation.

If the volume has a mapping to any hosts, it is impossible to move the volume to an I/O
group that does not include any of those hosts.

This operation fails if insufficient space exists to allocate bitmaps for a mirrored volume in
the target I/O group.

If the -force parameter is used and the system cannot destage all write data from the
cache, the contents of the volume are corrupted by the loss of the cached data.

If the -force parameter is used to move a volume that has out-of-sync copies, a full
resynchronization is required.
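For illustration, renaming a volume and then moving it to another I/O group takes two separate invocations. The following is a hedged sketch with hypothetical names; at current code levels, the nondisruptive I/O group move is performed by the separate movevdisk command:
chvdisk -name volume_A_new volume_A
movevdisk -iogrp io_grp1 volume_A_new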

6.6.8 Deleting a volume


To delete a volume, run the rmvdisk command. When this command is run on a managed
mode volume, any data on the volume is lost, and the extents that made up this volume are
returned to the pool of free extents in the storage pool.

If any remote copy (RC), IBM FlashCopy, or host mappings still exist for the target of the
rmvdisk command, the delete fails unless the -force flag is specified. This flag causes the
deletion of the volume and any volume-to-host mappings and copy mappings.

If the volume is being migrated to image mode, the delete fails unless the -force flag is
specified. Using the -force flag halts the migration and then deletes the volume.

If the command succeeds (without the -force flag) for an image mode volume, the write
cache data is flushed to the storage before the volume is removed. Therefore, the underlying
LU is consistent with the disk state from the point of view of the host that uses the image
mode volume (crash-consistent file system). If the -force flag is used, consistency is not
ensured, that is, the data that the host believes to be written might not be present on the LU.

If any non-destaged data exists in the fast write cache for the target of the rmvdisk command,
the deletion of the volume fails unless the -force flag is specified, in which case any
non-destaged data in the fast write cache is deleted.

Example 6-17 shows how to run the rmvdisk command to delete a volume from your
IBM Spectrum Virtualize configuration.

Example 6-17 The rmvdisk command


IBM_2145:ITSO_CLUSTER:superuser>rmvdisk volume_A

This command deletes the volume_A volume from the IBM Spectrum Virtualize configuration.
If the volume is assigned to a host, you must use the -force flag to delete the volume, as
shown in Example 6-18.

Example 6-18 The rmvdisk -force command


IBM_2145:ITSO_CLUSTER:superuser>rmvdisk -force volume_A

6.6.9 Volume protection


To prevent active volumes or host mappings from being deleted inadvertently, the system
supports a global setting that prevents these objects from being deleted if the system detects
recent I/O activity to these objects.

To set the time interval for which the volume must be idle before it can be deleted from the
system, run the chsystem command. This setting affects the following commands:
򐂰 rmvdisk
򐂰 rmvolume
򐂰 rmvdiskcopy
򐂰 rmvdiskhostmap
򐂰 rmmdiskgrp
򐂰 rmhostiogrp
򐂰 rmhost
򐂰 rmhostport

These commands fail unless the volume was idle for the specified interval or the -force
parameter was used.

To enable volume protection by setting the required inactivity interval, run the following
command:
svctask chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60

The -vdiskprotectionenabled yes parameter enables volume protection, and the
-vdiskprotectiontime parameter specifies for how long a volume must be inactive (in
minutes) before it can be deleted. In this example, volumes can be deleted only if they were
inactive for over 60 minutes.



To disable volume protection, run the following command:
svctask chsystem -vdiskprotectionenabled no

6.6.10 Expanding a volume


Expanding a volume presents a larger capacity disk to your OS. Although this expansion can
be easily performed by using IBM Spectrum Virtualize, you must ensure that your OS
supports expansion before this function is used.

Assuming that your OS supports expansion, you can run the expandvdisksize command to
increase the capacity of a volume, as shown in Example 6-19.

Example 6-19 The expandvdisksize command


IBM_2145:ITSO_CLUSTER:superuser>expandvdisksize -size 5 -unit gb volume_C

This command expands the volume_C volume (which was 35 GB) by another 5 GB to give it
a total size of 40 GB.

To expand a thin-provisioned volume, you can use the -rsize option, as shown in
Example 6-20. This command changes the real size of the volume_B volume to a real capacity
of 55 GB. The capacity of the volume is unchanged.

Example 6-20 The lsvdisk command


IBM_Storwize:ITSO:superuser>lsvdisk volume_B
id 26
capacity 100.00GB
type striped
.
.
copy_id 0
status online
used_capacity 0.41MB
real_capacity 50.02GB
free_capacity 50.02GB
overallocation 199
autoexpand on
warning 80
grainsize 32
se_copy yes

IBM_Storwize:ITSO:superuser>expandvdisksize -rsize 5 -unit gb volume_B


IBM_Storwize:ITSO:superuser>lsvdisk volume_B
id 26
name volume_B
capacity 100.00GB
type striped
.
.
copy_id 0
status online
used_capacity 0.41MB
real_capacity 55.02GB
free_capacity 55.02GB
overallocation 181
autoexpand on
warning 80

grainsize 32
se_copy yes

Important: If a volume is expanded, its type becomes striped, even if it was previously
sequential or in image mode.

If not enough extents are available to expand your volume to the specified size, the
following error message is displayed:
CMMVC5860E The action failed because there were not enough extents in the
storage pool.

6.6.11 HyperSwap volume modification with CLI


The following new CLI commands for administering volumes were released in IBM Spectrum
Virtualize V7.6. However, the GUI uses the new commands only for HyperSwap volume
creation (mkvolume) and deletion (rmvolume):
򐂰 mkvolume
򐂰 mkimagevolume
򐂰 addvolumecopy
򐂰 rmvolumecopy
򐂰 rmvolume

In addition, the lsvdisk output shows more fields: volume_id, volume_name, and function,
which help to identify the individual VDisks that make up a HyperSwap volume. This
information is used by the GUI to provide views that reflect the client’s view of the HyperSwap
volume and its site-dependent copies, as opposed to the “low-level” VDisks and VDisk
Change Volumes.

The following individual commands are related to HyperSwap:


򐂰 mkvolume
Creates an empty volume by using storage from a storage pool. The type of volume that is
created is determined by the system topology and the number of storage pools that is
specified. The volume is always formatted (zeroed). The mkvolume command can be used
to create the following objects:
– Basic volume: Any topology
– Mirrored volume: Standard topology
– Stretched volume: Stretched topology
– HyperSwap volume: HyperSwap topology
򐂰 rmvolume
Removes a volume. For a HyperSwap volume, this process includes deleting the
active-active relationship and the change volumes.
The -force parameter that is used by rmvdisk is replaced by a set of override parameters,
one for each operation-stopping condition, which makes it clearer to the user exactly what
protection they are bypassing.
򐂰 mkimagevolume
Creates an image mode volume. This command can be used to import a volume, which
preserves data. It can be implemented as a separate command to provide greater
differentiation between the action of creating an empty volume and creating a volume by
importing data on an MDisk.



򐂰 addvolumecopy
Adds a copy to a volume. The new copy is always synchronized from the existing copy. For
stretched and HyperSwap topology systems, this command creates a HA volume. This
command can be used to create the following volume types:
– Mirrored volume: Standard topology
– Stretched volume: Stretched topology
– HyperSwap volume: HyperSwap topology
򐂰 rmvolumecopy
Removes a copy of a volume. This command leaves the volume intact. It also converts a
Mirrored, Stretched, or HyperSwap volume to a basic volume. For a HyperSwap volume,
this command includes deleting the active-active relationship and the change volumes.
This command enables a copy to be identified by its site.
The -force parameter that is used by rmvdiskcopy is replaced by a set of override
parameters, one for each operation-stopping condition, making it clearer to the user
exactly what protection they are bypassing.
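As a brief hedged illustration (the volume and site names are hypothetical), converting a HyperSwap volume back to a basic volume and then deleting it might look like the following sketch:
rmvolumecopy -site site2 HS_VOL1
rmvolume HS_VOL1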

6.6.12 Mapping a volume to a host


To map a volume to a host, run the mkvdiskhostmap command. This mapping makes the
volume available to the host for I/O operations. A host can perform I/O operations only on
volumes that are mapped to it.

When the host bus adapter (HBA) on the host scans for devices that are attached to it, the
HBA discovers all of the volumes that are mapped to its FC ports and their SCSI identifiers
(SCSI LUN IDs).

For example, the first disk that is found is generally SCSI LUN 1. You can control the order in
which the HBA discovers volumes by assigning the SCSI LUN ID as required. If you do not
specify a SCSI LUN ID when mapping a volume to the host, the storage system automatically
assigns the next available SCSI LUN ID, based on any mappings that exist with that host.

Example 6-21 shows how to map volumes volume_B and volume_C to the defined host
Almaden by running the mkvdiskhostmap command.

Example 6-21 The mkvdiskhostmap command


IBM_Storwize:ITSO:superuser>mkvdiskhostmap -host Almaden volume_B
Virtual Disk to Host map, id [0], successfully created
IBM_Storwize:ITSO:superuser>mkvdiskhostmap -host Almaden volume_C
Virtual Disk to Host map, id [1], successfully created

Example 6-22 shows the output of the lshostvdiskmap command, which shows that the
volumes are mapped to the host.

Example 6-22 The lshostvdiskmap -delim command


IBM_2145:ITSO_CLUSTER:superuser>lshostvdiskmap -delim :
id:name:SCSI_id:vdisk_id:vdisk_name:vdisk_UID
2:Almaden:0:26:volume_B:6005076801AF813F1000000000000020
2:Almaden:1:27:volume_C:6005076801AF813F1000000000000021

Assigning a specific LUN ID to a volume: The optional -scsi scsi_lun_id parameter
can help assign a specific LUN ID to a volume that is to be associated with a host. The
default (if nothing is specified) is to assign the next available ID based on the current
volume that is mapped to the host.

Certain HBA device drivers stop when they find a gap in the sequence of SCSI LUN IDs, as
shown in the following examples:
򐂰 Volume 1 is mapped to Host 1 with SCSI LUN ID 1.
򐂰 Volume 2 is mapped to Host 1 with SCSI LUN ID 2.
򐂰 Volume 3 is mapped to Host 1 with SCSI LUN ID 4.

When the device driver scans the HBA, it might stop after discovering volumes 1 and 2
because no SCSI LUN is mapped with ID 3.

Important: Ensure that the SCSI LUN ID allocation is contiguous.

If you are using host clusters, run the mkvolumehostclustermap command to map a volume to
a host cluster instead (see Example 6-23).

Example 6-23 The mkvolumehostclustermap command


IBM_Storwize:ITSO:superuser>mkvolumehostclustermap -hostcluster vmware_cluster
UNCOMPRESSED_VOL
Volume to Host Cluster map, id [0], successfully created

6.6.13 Listing volumes that are mapped to the host


To show the volumes that are mapped to the specific host, run the lshostvdiskmap command,
as shown in Example 6-24.

Example 6-24 The lshostvdiskmap command


IBM_2145:ITSO_CLUSTER:superuser>lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,0,volume_A,210000E08B18FF8A,60050768018301BF280000000000000C

In the output of the command, you can see that only one volume (volume_A) is mapped to the
host Siam. The volume is mapped with SCSI LUN ID 0.

If no host name is specified by the lshostvdiskmap command, it returns all defined
host-to-volume mappings.

Specifying the flag before the host name: Although the -delim flag normally comes at
the end of the command string, you must specify this flag before the host name in this
case. Otherwise, it returns the following message:
CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or
incorrect argument sequence has been detected. Ensure that the input is as per
the help.



You can also run the lshostclustervolumemap command to show the volumes that are
mapped to a specific host cluster, as shown in Example 6-25.

Example 6-25 The lshostclustervolumemap command


IBM_Storwize:ITSO:superuser>lshostclustervolumemap
id name SCSI_id volume_id volume_name volume_UID
IO_group_id IO_group_name
0 vmware_cluster 0 9 UNCOMPRESSED_VOL 6005076400F580049800000000000011 0
io_grp0

6.6.14 Listing hosts that are mapped to the volume


To identify the hosts to which a specific volume was mapped, run the lsvdiskhostmap
command, as shown in Example 6-26.

Example 6-26 The lsvdiskhostmap command


IBM_2145:ITSO_CLUSTER:superuser>lsvdiskhostmap -delim , volume_B
id,name,SCSI_id,host_id,host_name,vdisk_UID
26,volume_B,0,2,Almaden,6005076801AF813F1000000000000020

This command shows the list of hosts to which the volume volume_B is mapped.

Specifying the -delim flag: Although the optional -delim flag normally comes at the end
of the command string, you must specify this flag before the volume name in this case.
Otherwise, the command does not return any data.

6.6.15 Deleting a volume to host mapping


Deleting a volume mapping does not affect the volume. Instead, it removes only the host’s
ability to use the volume. To unmap a volume from a host, run the rmvdiskhostmap command,
as shown in Example 6-27.

Example 6-27 The rmvdiskhostmap command


IBM_2145:ITSO_CLUSTER:superuser>rmvdiskhostmap -host Tiger volume_D

This command unmaps the volume that is called volume_D from the host that is called Tiger.

You can also run the rmvolumehostclustermap command to delete a volume mapping from a
host cluster, as shown in Example 6-28.

Example 6-28 The rmvolumehostclustermap command


IBM_Storwize:ITSO:superuser>rmvolumehostclustermap -hostcluster vmware_cluster
UNCOMPRESSED_VOL

This command unmaps the volume that is called UNCOMPRESSED_VOL from the host cluster that
is called vmware_cluster.

Note: Removing a volume mapping makes the volume unavailable to the host for I/O
operations. Ensure that the host is prepared for this situation before removing a volume
mapping.

6.6.16 Migrating a volume
You might want to migrate volumes from one set of MDisks to another set of MDisks to
decommission an old disk subsystem, to better distribute load across your virtualized
environment, or to migrate data into the IBM Spectrum Virtualize environment by using image
mode. For more information about migration, see Chapter 8, “Storage migration” on
page 443.

Important: After migration is started, it continues until it completes unless it is stopped or
suspended by an error condition or the volume that is being migrated is deleted.

As you can see from the parameters that are shown in Example 6-29, before you can migrate
your volume, you must determine the name of the volume that you want to migrate and the
name of the storage pool to which you want to migrate it. To list the names of volumes and
storage pools, run the lsvdisk and lsmdiskgrp commands.

The command that is shown in Example 6-29 moves volume_C to the storage pool that is
named STGPool_DS5000-1.

Example 6-29 The migratevdisk command


IBM_2145:ITSO_CLUSTER:superuser>migratevdisk -mdiskgrp STGPool_DS5000-1 -vdisk volume_C

Note: If insufficient extents are available within your target storage pool, you receive an
error message. Ensure that the source MDisk group and target MDisk group have the
same extent size.

You can use the optional threads parameter to control the priority of the migration process. The
default is 4, which is the highest priority setting. However, if you want the process to take a
lower priority than other types of I/O, you can specify 3, 2, or 1.
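For example, the following invocation is a sketch of running the same migration as
Example 6-29 at the lowest priority by specifying the threads parameter explicitly:

IBM_2145:ITSO_CLUSTER:superuser>migratevdisk -mdiskgrp STGPool_DS5000-1 -threads 1 -vdisk volume_C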

You can run the lsmigrate command at any time to see the status of the migration process,
as shown in Example 6-30.

Example 6-30 The lsmigrate command


IBM_2145:ITSO_CLUSTER:superuser>lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0

IBM_2145:ITSO_CLUSTER:superuser>lsmigrate
migrate_type MDisk_Group_Migration
progress 76
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0

Progress: The progress is shown as a percentage complete. If no output is displayed
when running the command, all volume migrations are finished.



6.6.17 Migrating a fully managed volume to an image mode volume
Migrating a fully managed volume to an image mode volume enables the IBM Spectrum
Virtualize system to be removed from the data path. This feature might be useful when the
IBM Spectrum Virtualize system is used as a data mover.

To migrate a fully managed volume to an image mode volume, the following rules apply:
򐂰 Cloud snapshots must not be enabled on the source volume.
򐂰 The destination MDisk must be greater than or equal to the size of the volume.
򐂰 The MDisk that is specified as the target must be in an unmanaged state.
򐂰 Regardless of the mode in which the volume starts, it is reported as a managed mode
during the migration.
򐂰 If the migration is interrupted by a system recovery or cache problem, the migration
resumes after the recovery completes.

Example 6-31 shows running the migratetoimage command to migrate the data from
volume_A onto mdisk10, and to put the MDisk mdisk10 into the STGPool_IMAGE storage pool.

Example 6-31 The migratetoimage command


IBM_2145:ITSO_CLUSTER:superuser>migratetoimage -vdisk volume_A -mdisk mdisk10 -mdiskgrp
STGPool_IMAGE

6.6.18 Shrinking a volume


The shrinkvdisksize command reduces the capacity that is allocated to the particular
volume by the specified amount. You cannot shrink the real size of a thin-provisioned volume
to less than its used size. All capacities (including changes) must be in multiples of 512 bytes.
An entire extent is reserved even if it is only partially used. The default capacity unit is MB.

You can use this command to shrink the physical capacity of a volume or to reduce the virtual
capacity of a thin-provisioned volume without altering the physical capacity that is assigned to
the volume. To change the volume size, use the following parameters:
򐂰 For a standard-provisioned volume, use the -size parameter.
򐂰 For a thin-provisioned volume’s real capacity, use the -rsize parameter.
򐂰 For a thin-provisioned volume’s virtual capacity, use the -size parameter.

When the virtual capacity of a thin-provisioned volume is changed, the warning threshold is
automatically scaled.
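For example, the following sketch reduces only the real capacity of a thin-provisioned
volume by 10 GB while leaving its virtual capacity unchanged (the volume name is
illustrative):

IBM_2145:ITSO_CLUSTER:superuser>shrinkvdisksize -rsize 10 -unit gb THIN_PROVISION_VOL_1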

If the volume contains data that is being used, do not shrink the volume without backing up
the data first. The system reduces the capacity of the volume by removing arbitrarily chosen
extents from those that are allocated to the volume. You cannot control which extents are
removed, so you cannot assume that only unused space is removed.

Image mode volumes cannot be reduced in size. To reduce their size, first they must be
migrated to fully managed mode.

Before the shrinkvdisksize command is used on a mirrored volume, all copies of the volume
must be synchronized.

Important: Consider the following guidelines when you are shrinking a disk:
򐂰 If the volume contains data or host-accessible metadata (for example, an empty
physical volume of an LVM), do not shrink the disk.
򐂰 This command can shrink a FlashCopy target volume to the same capacity as the
source.
򐂰 Before you shrink a volume, validate that the volume is not mapped to any host objects.
򐂰 You can determine the exact capacity of the source or master volume by running the
svcinfo lsvdisk -bytes vdiskname command.

Shrink the volume by the required amount by running the following command:
shrinkvdisksize -size disk_size -unit b | kb | mb | gb | tb | pb vdisk_name |
vdisk_id.

Example 6-32 shows running the shrinkvdisksize command to reduce the size of volume
volume_D from a total size of 80 GB by 44 GB to the new total size of 36 GB.

Example 6-32 The shrinkvdisksize command


IBM_2145:ITSO_CLUSTER:superuser>shrinkvdisksize -size 44 -unit gb volume_D

6.6.19 Listing volumes that use MDisks


To identify which volumes use space on the specified MDisk, run the lsmdiskmember
command. Example 6-33 displays a list of volume IDs of all volume copies that use mdisk8. To
correlate the IDs that are displayed in this output to volume names, run the lsvdisk
command.

Example 6-33 The lsmdiskmember command


IBM_2145:ITSO_CLUSTER:superuser>lsmdiskmember mdisk8
id copy_id
24 0
27 0

6.6.20 Listing MDisks that are used by the volume


To list MDisks that supply space that is used by the specified volume, run the lsvdiskmember
command. Example 6-34 lists the MDisk IDs of all MDisks that are used by the volume with
ID 0.

Example 6-34 The lsvdiskmember command


IBM_2145:ITSO_CLUSTER:superuser>lsvdiskmember 0
id
4
5
6
7

If you want to know more about these MDisks, you can run the lsmdisk command and provide
the MDisk ID that is listed in the output of the lsvdiskmember command as a parameter.



6.6.21 Listing volumes that are defined in the storage pool
To list volumes that are defined in the specified storage pool, run the lsvdisk -filtervalue
command. Example 6-35 shows how to use the lsvdisk -filtervalue command to list all
volumes that are defined in the storage pool that is named Pool0.

Example 6-35 The lsvdisk -filtervalue command: Volumes in the pool


IBM_Storwize:ITSO:superuser>lsvdisk -filtervalue mdisk_grp_name=Pool0 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC
_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_cha
nge,compressed_copy_count,parent_mdisk_grp_id,parent_mdisk_grp_name,formatting,encrypt,volu
me_id,volume_name,function
0,A_MIRRORED_VOL_1,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F5800498000000000
00002,0,1,empty,0,no,0,0,Pool0,no,yes,0,A_MIRRORED_VOL_1,
2,VOLUME_WITH_MIRRORED_COPY,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F5800498
00000000000004,0,1,empty,0,no,0,0,Pool0,no,yes,2,VOLUME_WITH_MIRRORED_COPY,
3,THIN_PROVISION_VOL_1,0,io_grp0,online,0,Pool0,100.00GB,striped,,,,,6005076400F58004980000
0000000005,0,1,empty,1,no,0,0,Pool0,no,yes,3,THIN_PROVISION_VOL_1,
6,MIRRORED_SYNC_RATE_16,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F58004980000
0000000008,0,1,empty,0,no,0,0,Pool0,no,yes,6,MIRRORED_SYNC_RATE_16,
7,THIN_PROVISION_MIRRORED_VOL,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F58004
9800000000000009,0,1,empty,1,no,0,0,Pool0,no,yes,7,THIN_PROVISION_MIRRORED_VOL,
8,Tiger,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F580049800000000000010,0,1,e
mpty,0,no,0,0,Pool0,no,yes,8,Tiger,
9,UNCOMPRESSED_VOL,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F5800498000000000
00011,0,1,empty,0,no,1,0,Pool0,no,yes,9,UNCOMPRESSED_VOL,
12,vdisk0_restore,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F58004980000000000
000E,0,1,empty,0,no,0,0,Pool0,no,yes,12,vdisk0_restore,
13,vdisk0_restore1,0,io_grp0,online,0,Pool0,10.00GB,striped,,,,,6005076400F5800498000000000
0000F,0,1,empty,0,no,0,0,Pool0,no,yes,13,vdisk0_restore1,

6.6.22 Listing storage pools in which a volume has its extents


To show to which storage pool a specific volume belongs, run the lsvdisk command, as
shown in Example 6-36.

Example 6-36 The lsvdisk command: Storage pool ID and name


IBM_Storwize:ITSO:superuser>lsvdisk 0
id 0
name A_MIRRORED_VOL_1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name Pool0
capacity 10.00GB
type striped
formatted yes
formatting no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076400F580049800000000000002

preferred_node_id 2
fast_write_state empty
cache readwrite
udid 4660
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
File system
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
owner_type none
owner_id
owner_name
encrypt yes
volume_id 0
volume_name A_MIRRORED_VOL_1
function
throttle_id 1
throttle_name throttle1
IOPs_limit 233
bandwidth_limit_MB 122
volume_group_id
volume_group_name
cloud_backup_enabled no
cloud_account_id
cloud_account_name
backup_status off
last_backup_time
restore_status none
backup_grain_size
deduplicated_copy_count 0

copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status measured
tier tier0_flash



tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB

To learn more about these storage pools, run the lsmdiskgrp command as described in
Chapter 5, “Storage pools” on page 221.

6.6.23 Tracing a volume from a host back to its physical disks


In some cases, you might need to verify exactly which physical disks are used to store the
data of a volume. This information is not directly available to the host, but it might be obtained
by using a sequence of queries.

Before you trace a volume, you must unequivocally map a logical device that is seen by the
host to a volume that is presented by the storage system. The best volume characteristic for
this purpose is the volume ID. This ID is available to the OS in the Vendor Specified Identifier
field of page 0x80 or 0x83 (vital product data (VPD)), which the storage device sends in
response to the SCSI INQUIRY command from the host.

In practice, the ID can be obtained from the multipath driver in the OS. After you know the
volume ID, you can use it to identify the physical location of data.

Note: For sequential and image mode volumes, a volume copy is mapped to exactly one
MDisk. This configuration usually does not apply to striped volumes unless the volume size
is less than the extent size. Therefore, a single striped volume typically uses multiple
MDisks.

On hosts that run IBM System Storage Multipath Subsystem Device Driver (SDD), you can
obtain the volume ID from the output of the datapath query device command. You see a long
disk serial number for each vpath device, as shown in Example 6-37.

Example 6-37 The datapath query device command


DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000005
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 20 0
1 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 2343 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000004
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 2335 0

1 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000006
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 2331 0
1 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0

State: In Example 6-37, the state of each path is OPEN. Sometimes, the state is CLOSED.
This state does not necessarily indicate a problem because it might be a result of the
path’s processing stage.

On a Linux host running a native multipath driver, you can use the output of command
multipath -ll to find the volume ID, as shown in Example 6-38.

Example 6-38 Volume ID returned by the multipath -ll command


mpath1 (360050768018301BF2800000000000004) IBM,2145
[size=2.0G][features=0][hwhandler=0]
\_ round-robin 0 [prio=200][ enabled]
\_ [Link] sdd 8:48 [active][ready]
\_ [Link] sdt 65:48 [active][ready]
\_ round-robin 0 [prio=40][ active]
\_ [Link] sdak 66:64 [active][ready]
\_ [Link] sdal 66:80 [active][ready]

Note: The volume ID that is shown in the output of multipath -ll is generated by the Linux
scsi_id utility. For systems that provide the VPD by using page 0x83 (such as IBM Spectrum
Virtualize devices), the ID that is obtained from the VPD page is prefixed by the number 3,
which is the Network Address Authority (NAA) type identifier. Therefore, the volume NAA
identifier (that is, the volume ID that is obtained by running the SCSI INQUIRY command)
starts at the second displayed digit. In Example 6-38, the volume ID starts with digit 6.
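For example, the multipath device mpath1 in Example 6-38 reports the ID
360050768018301BF2800000000000004. Dropping the leading NAA type digit 3 yields
60050768018301BF2800000000000004, which matches the vdisk_UID of volume_A in
Example 6-39.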

After you know the volume ID, complete the following steps:
1. To list volumes that are mapped to the host, run the lshostvdiskmap command.
Example 6-39 lists the volumes that are mapped to host Almaden.

Example 6-39 The lshostvdiskmap command


IBM_2145:ITSO_CLUSTER:superuser>lshostvdiskmap -delim , Almaden
id,name,SCSI_id,vdisk_id,vdisk_name,vdisk_UID
2,Almaden,0,26,volume_B,60050768018301BF2800000000000005
2,Almaden,1,27,volume_A,60050768018301BF2800000000000004
2,Almaden,2,28,volume_C,60050768018301BF2800000000000006

Look for the VDisk unique identifier (UID) that matches the volume UID that was identified
earlier, and note the volume name (or ID) of the volume with this UID.
2. To list the MDisks that contain extents that are allocated to the specified volume, run the
lsvdiskmember vdiskname command, as shown in Example 6-40.

Example 6-40 The lsvdiskmember command


IBM_2145:ITSO_CLUSTER:superuser>lsvdiskmember volume_A
id
0



1
2
3
4
10
11
13
15
16
17

3. For each of the MDisk IDs that were obtained in step 2 on page 367, run the lsmdisk
mdiskID command to discover the MDisk controller and LUN information. Example 6-41
shows the output for mdisk0. The output displays the back-end storage controller name
and the controller LUN ID to help you to track back to a LUN within the disk subsystem.

Example 6-41 The lsmdisk command


IBM_2145:ITSO_CLUSTER:superuser>lsmdisk 0
id 0
name mdisk0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 128.0GB
quorum_index 1
block_size 512
controller_name ITSO-DS3500
ctrl_type 4
ctrl_WWNN 20080080E51B09E8
controller_id 2
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000000
UID 60080e50001b0b62000007b04e731e4d00000000000000000000000000000000
preferred_WWPN 20580080E51B09E8
active_WWPN 20580080E51B09E8
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd

You can identify the back-end storage that is presenting the LUN by using the value of the
controller_name field that was returned for the MDisk.

On the back-end storage, you can identify which physical disks make up the LUN that was
presented to the IBM Spectrum Virtualize system by using the volume ID that is displayed in
the UID field.


Chapter 7. Hosts
This chapter describes the host configuration procedures that are required to attach
supported hosts to the systems, and it documents new improvements on the systems when
hosts are attached by using Non-Volatile Memory Express (NVMe) over Fabric (NVMe-oF).
Finally, this chapter explains host clustering and N_Port ID Virtualization (NPIV) support from
a host’s perspective.

This chapter includes the following topics:


򐂰 7.1, “Host attachment overview” on page 370
򐂰 7.2, “Host clusters” on page 371
򐂰 7.3, “NVMe over Fibre Channel” on page 371
򐂰 7.4, “N_Port ID Virtualization support” on page 372
򐂰 7.5, “Hosts operations by using the GUI” on page 381
򐂰 7.6, “Performing hosts operations by using the command-line interface” on page 430



7.1 Host attachment overview
The storage systems support many host types (from both IBM and other vendors). This
feature makes it possible to consolidate storage in an open systems environment into a
common pool of storage. Then, you can use and manage the storage pool more efficiently as
a single entity from a central point on the storage area network (SAN).

The ability to consolidate storage for attached open systems hosts provides the following
benefits:
򐂰 Easier storage management.
򐂰 Increased utilization rate of the installed storage capacity.
򐂰 Advanced Copy Services functions offered across storage systems from separate
vendors.
򐂰 Only one multipath driver is required for attached hosts.

Hosts can be connected to the systems by using any of the following protocols:
򐂰 Fibre Channel (FC)
򐂰 Fibre Channel over Ethernet (FCoE)
򐂰 Internet Small Computer Systems Interface (iSCSI)
򐂰 iSCSI Extensions for Remote Direct Memory Access (RDMA) (iSER)
򐂰 NVMe

Hosts that connect to the systems by using fabric switches that use the FC or FCoE protocol
must be zoned correctly, as described in Chapter 2, “Planning” on page 77.

Hosts that connect to the systems by using the iSCSI protocol must be configured correctly,
as described in Chapter 2, “Planning” on page 77.

Note: Certain host operating systems (OSs) can be directly connected to the
IBM FlashSystem system without needing FC fabric switches. For more information, see
the IBM System Storage Interoperation Center (SSIC).

For load balancing and access redundancy on the host side, using a host multipathing driver
is required in the following situations:
򐂰 Protection from fabric link failures, including port failures on the IBM Spectrum Virtualize
system nodes
򐂰 Protection from a host bus adapter (HBA) failure (if two HBAs are in use)
򐂰 Protection from fabric failures if the host is connected through two HBAs to two separate
fabrics
򐂰 Providing load balancing across the host HBAs

For more information about various host OSs and versions that are supported by an
IBM FlashSystem system, see the SSIC.

For more information about how to attach various supported host OSs to the systems, see the
“Host Attachment” section of IBM Knowledge Center.

If your host OS is not in the SSIC, you can ask your IBM System Services Representative
(IBM SSR) to submit a special request for support by contacting your IBM Business Partner,
account manager, or IBM Support.

7.2 Host clusters


The software supports host clusters starting with Version 7.7.1. By using a host
cluster, you can create a group of hosts to form a cluster. A cluster is treated as a single entity,
which allows multiple hosts to access the same volumes.

Volumes that are mapped to a host cluster are assigned to all members of the host cluster
with the same Small Computer System Interface (SCSI) ID.

A typical use case is to define a host cluster that contains all of the worldwide port names
(WWPNs) that belong to the hosts that participate in a host OS-based cluster, such as
IBM PowerHA® and Microsoft Cluster Server.

The following commands deal with host clusters:


򐂰 lshostcluster
򐂰 lshostclustermember
򐂰 lshostclustervolumemap
򐂰 mkhost (modified to put a host in a host cluster on creation; see the sketch after this list)
򐂰 rmhostclustermember
򐂰 rmhostcluster
򐂰 rmvolumehostclustermap
򐂰 chhostcluster
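
For example, a host can be placed into a host cluster when it is created. The following
invocation is a sketch that assumes the -hostcluster parameter of the mkhost command; the
host name, WWPN, and cluster name are illustrative:

IBM_2145:ITSO_CLUSTER:superuser>mkhost -name ESX_HOST_03 -fcwwpn 2100000E1E30E597 -hostcluster vmware_cluster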

7.3 NVMe over Fibre Channel


While running Version 8.2 and later, IBM FlashSystem 9100 and 7200 systems can attach
NVMe hosts by using NVMe over Fibre Channel (FC-NVMe). FC-NVMe uses the Fibre
Channel Protocol (FCP) as its underlying transport, which puts the data transfer in control of
the target and transfers data directly from host memory, as RDMA does. In addition, with
FC-NVMe a host can send commands and data together (first burst), which eliminates the
first data “read” by the target and provides better performance at distance. With Version [Link], the
IBM FlashSystem 5100 also provides full support for NVMe over Fabric (NVMe-oF), along
with many feature improvements on all systems.

The limitation in Version 8.2 of being able to attach only one of SCSI or NVMe per I/O
group was removed in Version 8.3. You can now run SCSI and NVMe in parallel. The limit
of 512 host objects per I/O group remains in place, as mentioned in 2.12.4, “Planning for large
deployments” on page 97. When you run SCSI and NVMe in parallel, there are limits for each
protocol, as shown in Table 7-1.

Table 7-1 Defined host object limits per I/O group for IBM FlashSystem 9100 and 7200 systems
SCSI host objects NVMe host objects Total host objects

496 16 512a
a. IBM Storwize V5100 systems have a maximum of 256 hosts.



Note: Although the specifications that are shown in Table 7-1 on page 371 are the
maximum amount of each type of host attachment you can have, if you do not have the
maximum NVMe Host Objects defined, then these objects do not reduce the number of
total host objects that you can have. For example, if you had 10 NVMe host objects that are
defined, you might have up to 502 SCSI host objects defined. The only hard limit is 16
NVMe host objects per I/O group. You should be diligent when planning a parallel SCSI
and NVMe deployment as described in Chapter 2, “Planning” on page 77 because it can
be resource-intensive, especially with large deployments (many hosts). This situation
occurs because NVMe is more sensitive to delays than SCSI. It is a best practice to check
the configuration Limits page for your product and specifically the “NVMe over Fibre
Channel Host Properties” section.

Ensure that you select the correct product.

Although the protocol attachment limit was removed, a volume can be mapped to a host only
by using one protocol (to avoid any potential interoperability problems). IBM FlashCopy,
Volume Mirroring, Remote Copy (RC), and Data Reduction Pools (DRPs) are all supported by
NVMe-oF. New to Version 8.3.1 is support for stretched cluster configurations.

Note: There is no support for IBM HyperSwap or Non-Disruptive Volume Move (NDVM)
because NVMe does not enable accessing the same volume from more than one NVMe
subsystem. In Version 8.2, each I/O group was its own NVMe subsystem, so it was not
possible to use NDVM or HyperSwap. However, in Version [Link], the entire cluster is a
single NVMe subsystem.

For more information about NVMe, see IBM Storage and the NVM Express Revolution,
REDP-5437 and Chapter 2, “Planning” on page 77.

7.4 N_Port ID Virtualization support


The usage model for IBM FlashSystem 5100, IBM FlashSystem 7200, and IBM FlashSystem
9100 is based on a two-way active/active node model. This model has a pair of distinct control
modules that share active/active access for any specific volume. These nodes each have their
own FC worldwide node name (WWNN). Therefore, ports that are presented from each node
have a set of WWPNs that are presented to the fabric.

Traditionally, if one node fails or is removed for some reason, the paths that are presented for
volumes from that node go offline. In this case, it is up to the native OS multipathing software
to fail over from using both sets of WWPNs to only those that remain online. Although
this process is what multipathing software is designed to do, occasionally it can be
problematic, particularly if paths are not seen as coming back online for some reason.

Starting with Version 7.7, the systems can be enabled into NPIV mode. When NPIV mode is
enabled on the systems, ports do not come online until they are ready to service I/O, which
improves host behavior around node unpends. In addition, path failures because of an offline
node are masked from hosts, and the multipathing driver does not need to perform any path
recovery.

When NPIV is enabled on the nodes, each physical WWPN reports up to four virtual WWPNs,
as listed in Table 7-2 on page 373.

Table 7-2 IBM Spectrum Virtualize NPIV ports

NPIV port                       Port description

Primary port                    The WWPN that communicates with back-end storage. It can be
                                used for node to node traffic (local or remote).

Primary SCSI host attach port   The WWPN that communicates with hosts. It is a target port only.
                                It is the primary port, so it is based on this local node’s WWNN.

Failover SCSI host attach port  A standby WWPN that communicates with hosts that is brought
                                online only if the partner node within the I/O group goes offline.
                                This WWPN is the same as the primary host attach WWPN of the
                                partner node.

Primary NVMe host attach port   The WWPN that communicates with hosts. It is a target port only.
                                This WWPN is the primary port, so it is based on this local node’s
                                WWNN.

Failover NVMe host attach port  A standby WWPN that communicates with hosts that is brought
                                online only if the partner node within the I/O group goes offline.
                                This WWPN is the same as the primary host attach WWPN of the
                                partner node.

Figure 7-1 shows the WWPNs that are associated with a port when NPIV is enabled.

Figure 7-1 Allocation of NPIV virtual WWPN ports per physical port



The failover host attach port is not active. Figure 7-2 shows what happens when the partner
node fails. After the node failure, the failover host attach ports on the remaining node become
active and take on the WWPN of the failed node’s primary host attach port.

Note: Figure 7-2 shows only two ports per node in detail, but the same situation applies for
all physical ports. The effect is the same for NVMe ports because they use the same NPIV
structure, but with the NVMe topology instead of regular SCSI.

Figure 7-2 Allocation of NPIV virtual WWPN ports per physical port after a node failure

With Version 7.7 onwards, this process happens automatically when NPIV is enabled at a
system level in the systems. This failover happens only between the two nodes in the same
I/O group.

A transitional mode enables migration of hosts from systems without NPIV to NPIV-enabled
systems, providing a transition period while hosts are rezoned to the primary host
attach WWPNs.

The process for enabling NPIV on a new system is slightly different than on an existing
system. For more information, see IBM Knowledge Center.

Note: NPIV is supported for FCP only. It is not supported for FCoE or iSCSI protocols.

7.4.1 NPIV prerequisites


Consider the following key points for NPIV enablement:
򐂰 The system must be running Version 7.7 or later.
򐂰 A Version 7.7 or later system with NPIV enabled as back-end storage for a system that is
earlier than Version 7.7 is not supported.

򐂰 Both nodes in an I/O group should have identical hardware to enable failover to work as
expected.
򐂰 The FC switches to which the system ports are attached must support NPIV and have this
feature enabled.

7.4.2 Enabling NPIV on a new system


New systems that are shipped with Version 7.7 or later should have NPIV enabled by default.
If your new system does not have NPIV enabled, it can be enabled by completing the
following steps:
1. Run the lsiogrp command to list the I/O groups present in a system, as shown in
Example 7-1.

Example 7-1 Listing the I/O groups in a system


IBM_Storwize:ITSO-V7000:superuser>lsiogrp
id name node_count vdisk_count host_count site_id site_name
0 io_grp0 2 2 1
1 io_grp1 0 0 1
2 io_grp2 0 0 1
3 io_grp3 0 0 1
4 recovery_io_grp 0 0 0

2. Run the lsiogrp <id> | grep fctargetportmode command for the specific I/O group ID to
display the fctargetportmode setting. If this setting is enabled, as shown in Example 7-2,
NPIV host target port mode is enabled.

Example 7-2 Checking the NPIV mode with the fctargetportmode field
IBM_Storwize:ITSO-V7000:superuser>lsiogrp 0 | grep fctargetportmode
fctargetportmode enabled
IBM_Storwize:ITSO-V7000:superuser>

3. The virtual WWPNs can be listed by running the lstargetportfc command, as shown in
Example 7-3. The host_io_permitted and virtualized columns should be yes, meaning
the WWPN in those lines is a primary host attach port and should be used when zoning
the hosts to the system.

Example 7-3 Listing the virtual WWPNs


IBM_Storwize:ITSO-V7000:superuser>lstargetportfc
id WWPN WWNN port_id owning_node_id current_node_id nportid host_io_permitted virtualized
protocol
1 500507680140A288 500507680100A288 1 1 1 010A00 no no scsi
2 500507680142A288 500507680100A288 1 1 1 010A02 yes yes scsi
3 500507680144A288 500507680100A288 1 1 1 010A01 yes yes nvme
4 500507680130A288 500507680100A288 2 1 1 010400 no no scsi
5 500507680132A288 500507680100A288 2 1 1 010401 yes yes scsi
6 500507680134A288 500507680100A288 2 1 1 010402 yes yes nvme
7 500507680110A288 500507680100A288 3 1 1 010500 no no scsi
8 500507680112A288 500507680100A288 3 1 1 010501 yes yes scsi
9 500507680114A288 500507680100A288 3 1 1 010502 yes yes nvme
10 500507680120A288 500507680100A288 4 1 1 010A00 no no scsi
11 500507680122A288 500507680100A288 4 1 1 010A02 yes yes scsi
12 500507680124A288 500507680100A288 4 1 1 010A01 yes yes nvme
49 500507680C110009 500507680C000009 1 2 2 010500 no no scsi
50 500507680C150009 500507680C000009 1 2 2 010502 yes yes scsi
51 500507680C190009 500507680C000009 1 2 2 010501 yes yes nvme
52 500507680C120009 500507680C000009 2 2 2 010400 no no scsi



53 500507680C160009 500507680C000009 2 2 2 010401 yes yes scsi
54 500507680C1A0009 500507680C000009 2 2 2 010402 yes yes nvme
55 500507680C130009 500507680C000009 3 2 2 010900 no no scsi
56 500507680C170009 500507680C000009 3 2 2 010902 yes yes scsi
57 500507680C1B0009 500507680C000009 3 2 2 010901 yes yes nvme
58 500507680C140009 500507680C000009 4 2 2 010900 no no scsi
59 500507680C180009 500507680C000009 4 2 2 010901 yes yes scsi
60 500507680C1C0009 500507680C000009 4 2 2 010902 yes yes nvme
IBM_Storwize:ITSO-V7000:superuser>

4. NPIV enablement can be verified by checking the fctargetportmode field, as shown in
Example 7-4.

Example 7-4 NPIV enablement verification


IBM_Storwize:ITSO-V7000:superuser>lsiogrp 0
id 0
name io_grp0
node_count 2
vdisk_count 2
host_count 2
flash_copy_total_memory 20.0MB
flash_copy_free_memory 20.0MB
remote_copy_total_memory 20.0MB
remote_copy_free_memory 20.0MB
mirroring_total_memory 20.0MB
mirroring_free_memory 20.0MB
raid_total_memory 40.0MB
raid_free_memory 38.8MB
maintenance no
compression_active yes
accessible_vdisk_count 2
compression_supported yes
max_enclosures 10
encryption_supported no
flash_copy_maximum_memory 552.0MB
site_id
site_name
fctargetportmode enabled
compression_total_memory 2047.9MB

You can now configure your zones for hosts by using the primary host attach ports (virtual
WWPNs) of the system ports, as shown in bold in the output of Example 7-3 on page 375.
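
As a sketch, a SCSI host zone that uses two of the primary host attach ports of node 1 from
Example 7-3 might look like the following example (the first member is an illustrative host
HBA WWPN):

zone: WINDOWS_HOST_01_IBM_ITSOV7000
2100000E1E30E597
500507680142A288
500507680132A288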

7.4.3 Enabling NPIV on an existing system


When IBM Spectrum Virtualize systems that are running code earlier than Version 7.7.1 are
upgraded to Version 7.7.1 or later, the NPIV feature is not turned on by default because it
might require changes to host-side zoning.

Enabling NPIV on a system requires that you complete the following steps after you meet the
prerequisites:
1. Audit your SAN fabric layout and zoning rules because NPIV has stricter requirements.
Ensure that equivalent ports are on the same fabric and in the same zone.
2. Check the path count between your hosts and the IBM Spectrum Virtualize system to
ensure that the number of paths is half of the usual supported maximum.

For more information, see the topic about zoning considerations for NPIV in IBM
Knowledge Center.
3. Run the lstargetportfc command to discover the primary host attach WWPNs (virtual
WWPNs), as shown in bold in Example 7-5.

Example 7-5 Using the lstargetportfc command to get primary host WWPNs (virtual WWPNs)
IBM_Storwize:ITSO-V7000:superuser>lstargetportfc
id WWPN WWNN port_id owning_node_id current_node_id nportid host_io_permitted virtualized
protocol
1 500507680140A288 500507680100A288 1 1 1 010A00 yes no scsi
2 500507680142A288 500507680100A288 1 1 000000 no yes scsi
3 500507680144A288 500507680100A288 1 1 000000 no yes nvme
4 500507680130A288 500507680100A288 2 1 1 010400 yes no scsi
5 500507680132A288 500507680100A288 2 1 000000 no yes scsi
6 500507680134A288 500507680100A288 2 1 000000 no yes nvme
7 500507680110A288 500507680100A288 3 1 1 010500 yes no scsi
8 500507680112A288 500507680100A288 3 1 000000 no yes scsi
9 500507680114A288 500507680100A288 3 1 000000 no yes nvme
10 500507680120A288 500507680100A288 4 1 1 010A00 yes no scsi
11 500507680122A288 500507680100A288 4 1 000000 no yes scsi
12 500507680124A288 500507680100A288 4 1 000000 no yes nvme
49 500507680C110009 500507680C000009 1 2 2 010500 yes no scsi
50 500507680C150009 500507680C000009 1 2 000000 no yes scsi
51 500507680C190009 500507680C000009 1 2 000000 no yes nvme
52 500507680C120009 500507680C000009 2 2 2 010400 yes no scsi
53 500507680C160009 500507680C000009 2 2 000000 no yes scsi
54 500507680C1A0009 500507680C000009 2 2 000000 no yes nvme
55 500507680C130009 500507680C000009 3 2 2 010900 yes no scsi
56 500507680C170009 500507680C000009 3 2 000000 no yes scsi
57 500507680C1B0009 500507680C000009 3 2 000000 no yes nvme
58 500507680C140009 500507680C000009 4 2 2 010900 yes no scsi
59 500507680C180009 500507680C000009 4 2 000000 no yes scsi
60 500507680C1C0009 500507680C000009 4 2 000000 no yes nvme

4. Enable transitional mode for NPIV on the system (see Example 7-6).

Example 7-6 NPIV in transitional mode


IBM_Storwize:ITSO-V7000:superuser>chiogrp -fctargetportmode transitional 0
IBM_Storwize:ITSO-V7000:superuser>lsiogrp 0 |grep fctargetportmode
fctargetportmode transitional

Alternatively, to activate NPIV in transitional mode by using the GUI, select Settings →
System → I/O Groups, as shown in Figure 7-3.

Figure 7-3 I/O Groups menu



You can then check the current NPIV setting by viewing the NPIV column, which shows
“disabled” if NPIV is not enabled. Select the I/O group on which you want to enable NPIV
and select Actions → Change NPIV Settings, as shown in Figure 7-4.

Figure 7-4 Change NPIV Settings

After you select Continue, NPIV is enabled in Transitional Mode.


5. Ensure that the primary host attach WWPNs (virtual WWPNs) now allow host traffic, as
shown in bold in Example 7-7.

Example 7-7 Host attach WWPNs (virtual WWPNs) permitting host traffic
IBM_2145:ITSO-V7000:superuser>lstargetportfc
id WWPN WWNN port_id owning_node_id current_node_id nportid host_io_permitted virtualized
protocol
1 500507680140A288 500507680100A288 1 1 1 010A00 yes no scsi
2 500507680142A288 500507680100A288 1 1 1 010A02 yes yes scsi
3 500507680144A288 500507680100A288 1 1 1 010A01 yes yes nvme
4 500507680130A288 500507680100A288 2 1 1 010400 yes no scsi
5 500507680132A288 500507680100A288 2 1 1 010401 yes yes scsi
6 500507680134A288 500507680100A288 2 1 1 010402 yes yes nvme
7 500507680110A288 500507680100A288 3 1 1 010500 yes no scsi
8 500507680112A288 500507680100A288 3 1 1 010501 yes yes scsi
9 500507680114A288 500507680100A288 3 1 1 010502 yes yes nvme
10 500507680120A288 500507680100A288 4 1 1 010A00 yes no scsi
11 500507680122A288 500507680100A288 4 1 1 010A02 yes yes scsi
12 500507680124A288 500507680100A288 4 1 1 010A01 yes yes nvme
49 500507680C110009 500507680C000009 1 2 2 010500 yes no scsi
50 500507680C150009 500507680C000009 1 2 2 010502 yes yes scsi
51 500507680C190009 500507680C000009 1 2 2 010501 yes yes nvme
52 500507680C120009 500507680C000009 2 2 2 010400 yes no scsi
53 500507680C160009 500507680C000009 2 2 2 010401 yes yes scsi
54 500507680C1A0009 500507680C000009 2 2 2 010402 yes yes nvme
55 500507680C130009 500507680C000009 3 2 2 010900 yes no scsi
56 500507680C170009 500507680C000009 3 2 2 010902 yes yes scsi
57 500507680C1B0009 500507680C000009 3 2 2 010901 yes yes nvme
58 500507680C140009 500507680C000009 4 2 2 010900 yes no scsi
59 500507680C180009 500507680C000009 4 2 2 010901 yes yes scsi
60 500507680C1C0009 500507680C000009 4 2 2 010902 yes yes nvme
IBM_2145:ITSO-V7000:superuser>

6. Add the primary host attach ports (virtual WWPNs) to your host zones but do not remove
the IBM FlashSystem WWPNs that are in the zones. Example 7-8 shows a host zone to
the primary port WWPNs of the IBM FlashSystem nodes.

Example 7-8 Legacy host zone


zone: WINDOWS_HOST_01_IBM_ITSOV7000
[Link]
[Link]
[Link]

Example 7-9 shows that we added the primary host attach ports (virtual WWPNs) to our
example host zone so that we can change the host without disrupting its availability.

Example 7-9 Transitional host zone


zone: WINDOWS_HOST_01_IBM_ITSOV7000
[Link]
[Link]
[Link]
[Link]
[Link]

7. With transitional zoning active in your fabrics, ensure that the host is using the new NPIV
ports for host I/O. Example 7-10 shows the before and after pathing for our host. The
select count increases on the new paths and stops increasing on the old paths.

Example 7-10 Host device pathing: Before and after


C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 1

DEV#: 0 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050764008680083800000000000000 LUN SIZE: 20.0GB
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 * Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 3991778 0
1 * Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 416214 0
2 * Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 22255 0
3 * Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 372785 0
C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 1

DEV#: 0 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050764008680083800000000000000 LUN SIZE: 20.0GB
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 * Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 3991778 2
1 * Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 416214 1
2 * Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 22255 0
3 * Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 372785 2
4 * Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 22219 0
5 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 95109 0



6 * Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 2 0
7 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 91838 0

Note: Consider the following points:


򐂰 You can verify that you are logged in to the NPIV ports by running the lsfabric
-host host_id_or_name command. If I/O activity is occurring, each host has at least
one line in the command output that corresponds to a host port and shows active in
the activity field:
– Hosts where no I/O occurred in the past 5 minutes show inactive for any login.
– Hosts that do not adhere to preferred paths might still be processing I/O to
primary ports.
򐂰 Depending on the host OS, rescanning of storage might be required on some hosts
to recognize more paths that are now provided by using primary host attach ports
(virtual WWPNs), as shown in the sketch after this note.
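
For example, on a Linux host, a SCSI bus rescan can often be triggered without a reboot by
writing to sysfs. This is a sketch; the host adapter numbers vary by system:

echo "- - -" > /sys/class/scsi_host/host0/scan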

8. After all hosts are rezoned and the pathing is validated, change the system NPIV to
enabled mode by running the command that is shown in Example 7-11.

Example 7-11 Enabling the NPIV


IBM_2145:ITSO:superuser>chiogrp -fctargetportmode enabled 0

Alternatively, to enable NPIV by using the GUI, go to the I/O Groups window, as shown in
Step 4, select the I/O group, click Actions, and then click Change NPIV Settings. The
NPIV Settings window opens, as shown in Figure 7-5.

Figure 7-5 Change NPIV Settings: Enabling

After you select Continue, NPIV is enabled on the I/O group.

NPIV is enabled on the system, and you confirmed that the hosts are using the virtualized
WWPNs for I/O. To complete the NPIV implementation, you can modify the host zones to
remove the old primary attach port WWPNs. Example 7-12 on page 381 shows the final zone
with the host HBA and the IBM FlashSystem virtual WWPNs.

Example 7-12 Final host zone
zone: WINDOWS_HOST_01_IBM_ITSOV7000
[Link]
[Link]
[Link]

Note: If any hosts are still configured to use the physical ports on the system, the system
prevents you from changing the fctargetportmode from transitional to enabled and
shows the following error:
CMMVC8019E Task could interrupt IO and force flag not set.

7.5 Hosts operations by using the GUI


This section describes performing the following host operations by using the IBM Spectrum
Virtualize GUI:
򐂰 Creating hosts
򐂰 Advanced host administration
򐂰 Adding and deleting host ports
򐂰 Host mappings overview

7.5.1 Creating hosts


This section describes how to create FC, iSCSI, and NVMe hosts by using the GUI. It is
assumed that hosts are prepared for attachment and that the host WWPNs, iSCSI initiator
names, or NVMe Qualified Names (NQNs) are known.

For more information, see the “Host Attachment” section of IBM Knowledge Center.



To create a host, complete the following steps:
1. Open the host configuration window by clicking Hosts (see Figure 7-6).

Figure 7-6 Opening the host window

2. To create a host, click Add Host. If you want to create an FC host, continue with “Creating
Fibre Channel hosts” on page 383. To create an iSCSI host, go to “Creating iSCSI hosts”
on page 387. To create an NVMe host, go to “Creating NVMe hosts” on page 392.

Creating Fibre Channel hosts
To create FC hosts, complete the following steps:
1. Select Fibre Channel. The FC configuration window opens (see Figure 7-7).

Figure 7-7 Fibre Channel host configuration



2. Enter a host name and click the Host Port menu to get a list of all discovered WWPNs
(see Figure 7-8).

Figure 7-8 Available WWPNs

3. Select one or more WWPNs for your host. The IBM FlashSystem 9100, IBM FlashSystem
5100, or IBM FlashSystem 7200 systems should have the host port WWPNs available if
the host is prepared correctly. If they do not appear in the list, scan for new disks as
required on the respective OS and click the Rescan icon in the WWPN box. If they still do
not appear, check the SAN zoning and repeat the scanning.

Creating offline hosts: If you want to create hosts that are offline or not connected at
the moment, it is also possible to enter the WWPNs manually. Enter them into the Host
Ports field to add them to the list.

4. If you want to add more ports to your host, click the plus sign (+) to add all ports that
belong to the specific host.

5. If you are creating a Hewlett-Packard UNIX (HP-UX) or Target Port Group Support (TPGS)
host, select the appropriate Host type (see Figure 7-9). If your specific host
type is not listed, select generic.

Figure 7-9 Host type selection



6. If you set up object-based access control (OBAC) as described in Chapter 11, “Ownership
groups” on page 689, then select the ownership group that you want the host to be a part
of from the Ownership Group menu, as seen in Figure 7-10.

Figure 7-10 Adding the new host to an ownership group

7. Click Add to create the host object.


8. Click Close to return to the host window.
9. Repeat these steps for all of your FC hosts. Figure 7-11 shows the All Hosts window after
creating a second host.

Figure 7-11 Hosts view after creating a host

After you add the FC hosts, to create volumes and map them to the created hosts, see
Chapter 6, “Volumes” on page 277.

Creating iSCSI hosts
When creating an iSCSI-attached host, consider the following points:
򐂰 iSCSI Internet Protocol (IP) addresses can fail over to the partner node in the I/O group if
a node fails. This design reduces the need for multipathing support in the iSCSI host.
򐂰 The iSCSI Qualified Name (IQN) of the host is added to an IBM FlashSystem host object in
the same way that you add FC WWPNs.
򐂰 Host objects can have WWPNs and IQNs.
򐂰 Standard iSCSI host connection procedures can be used to discover and configure the
IBM FlashSystem systems as an iSCSI target.
򐂰 The IBM FlashSystem system supports the Challenge Handshake Authentication Protocol
(CHAP) authentication methods for iSCSI.
򐂰 The name [Link].<cluster_name>.<node_name> is the IQN for an
IBM FlashSystem node. Because the IQN contains the clustered system name and the
node name, do not change these names after iSCSI is deployed.
򐂰 Each node can be given an iSCSI alias as an alternative to the IQN, as shown in the
sketch after this list.
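
A minimal sketch of setting an alias from the CLI, assuming the -iscsialias parameter of the
chnodecanister command on this platform (the alias and node names are illustrative):

IBM_Storwize:ITSO-V7000:superuser>chnodecanister -iscsialias itso_node1_alias node1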

To create iSCSI hosts, complete the following steps:


1. Click iSCSI, and the iSCSI configuration options window opens (see Figure 7-12).

Figure 7-12 Adding an iSCSI host



2. Enter a host name and the iSCSI initiator name into the iSCSI host IQN field. Click the
plus sign (+) if you want to add initiator names to one host.
3. If you are connecting an HP-UX or TPGS host, click the Host type menu and then select
the correct host type. For our VMware Elastic Sky X (ESX) host, we select VVOL.
However, generic can be selected if you are not using VMware vSphere Virtual Volumes
(VVOLs).
4. Click Add and then click Close to complete the host object definition.
5. Repeat these steps for every iSCSI host that you want to create. Figure 7-13 shows the All
Hosts window after creating two FC hosts and one iSCSI host.

Figure 7-13 Defined Hosts list

Although the iSCSI host is now configured, the iSCSI Ethernet ports must also be configured
to provide connectivity.

To enable iSCSI connectivity, complete the following steps:


1. Select Settings → Network and select the iSCSI tab (see Figure 7-14).

Figure 7-14 Network: iSCSI settings

In the iSCSI Configuration window, you can modify the system name, node names, and
provide an optional iSCSI Alias for each node, if wanted (see Figure 7-15 on page 389).

Figure 7-15 iSCSI Configuration window

The window shows an Apply Changes prompt to apply any changes you made before
continuing.
Lower in the configuration window, you can also configure internet Storage Name Service
(iSNS) addresses and CHAP if you need these items in your environment.

Note: The authentication of hosts is optional. By default, it is disabled. The user can
choose to enable CHAP authentication, which involves sharing a CHAP secret between
the cluster and the host. If the correct key is not provided by the host, the
IBM FlashSystem system does not allow it to perform I/O to volumes. Also, you can
assign a CHAP secret to the cluster.

2. To set the iSCSI IP address for each node, click the Ethernet Ports tab (see Figure 7-16).

Figure 7-16 Entering an iSCSI IP address



3. Select the port on which you want to set the iSCSI IP information and click Actions →
Modify IP Settings. The dialog box that is shown in Figure 7-17 opens. Make any necessary
changes and click Modify.

Figure 7-17 Modifying IP settings

4. After entering the IP address for a port, click Modify to enable the configuration. After the
changes are successfully applied, click Close.
You can see that iSCSI is enabled for host I/O on the required interfaces by the presence
of “yes” under the Host Attach column (see Figure 7-18).

Figure 7-18 Actions menu on iSCSI ports

5. By default, the iSCSI host connection is enabled after setting the IP address. To disable
any interfaces that you do not want to be used for host connections, select Actions →
Modify iSCSI Hosts. The dialog box that is shown in Figure 7-19 opens. Make any
necessary changes and click Modify.

Figure 7-19 Disabling iSCSI host connections



6. As a best practice, use the iSCSI network in a separate subnet. It is possible to set a
virtual local area network (VLAN) for the iSCSI traffic. To enable the VLAN, click
Actions → Modify VLAN. The dialog box that is shown in Figure 7-20 opens. Make any
necessary changes and click Modify.

Figure 7-20 Modifying the VLAN

The system is now configured and ready for iSCSI host use. Note the initiator IQN names of
your node canisters (see Figure 7-15 on page 389) because you need them when you add
storage to your host. For more information about creating volumes and mapping them to a
host, see Chapter 6, “Volumes” on page 277.

Creating NVMe hosts


The process for creating NVMe hosts is like creating SCSI FC hosts, but instead of
WWPNs, you use the host port NQN.

Note: To see whether your hosts and IBM FlashSystem system are compatible, see the
SSIC.

To configure an NVMe host, complete the following steps:
1. Go to the host window and click Add Host. Then, in the Host connections menu, select
Fibre Channel (NVMe), as shown in Figure 7-21.

Figure 7-21 Creating an NVMe host



2. Enter the host name and NQN of the host, as shown in Figure 7-22.

Figure 7-22 Defining the NQN

3. Click Add. Your host appears in the defined host list, as shown in Figure 7-23.

Figure 7-23 NVMe host created

4. The I/O group NQN must be configured on the host. You can get the I/O group NQN by
running the lsiogrp command, as shown in Example 7-13.

Example 7-13 The lsiogrp command


IBM_Storwize:ITSO-V7000:superuser>lsiogrp 0
id 0
name io_grp0
node_count 2

vdisk_count 11
host_count 0
flash_copy_total_memory 20.0MB
flash_copy_free_memory 20.0MB
remote_copy_total_memory 20.0MB
remote_copy_free_memory 19.9MB
mirroring_total_memory 20.0MB
mirroring_free_memory 19.9MB
raid_total_memory 40.0MB
raid_free_memory 38.7MB
maintenance no
compression_active yes
accessible_vdisk_count 11
compression_supported yes
max_enclosures 21
encryption_supported yes
flash_copy_maximum_memory 2048.0MB
site_id
site_name
fctargetportmode enabled
compression_total_memory 11756.9MB
deduplication_supported yes
deduplication_active no
nqn [Link]:nvme:2145.0000010029C056C2.iogroup0
IBM_Storwize:ITSO-V7000:superuser>

5. You can now configure your NVMe host to use the IBM FlashSystem system as a target.
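
For example, on a Linux host with the nvme-cli utility and an FC-NVMe capable HBA, the
connection might be established as shown in the following sketch. The target addresses use
the WWNN and the virtualized NVMe WWPN of node 1 from Example 7-3, the host transport
address is illustrative, and the NQN must be replaced with the value that is reported by
lsiogrp for your I/O group:

nvme connect --transport=fc \
--traddr=nn-0x500507680100A288:pn-0x500507680144A288 \
--host-traddr=nn-0x20000090FA000000:pn-0x10000090FA000000 \
--nqn=<iogroup_nqn>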

Note: For more information about a compatibility matrix and supported hardware, see
IBM Knowledge Center and the SSIC.

7.5.2 About host clusters


Code level Version 7.7 introduced the concept of a host cluster. A host cluster enables a user
to group individual hosts to form a cluster, which is treated as one entity instead of dealing
with all of the hosts individually in the cluster.

The host cluster is useful for hosts that are participating in a cluster at host OS levels.
Examples are Microsoft Clustering Server, IBM PowerHA, Red Hat Cluster Suite, and
VMware ESX. By defining a host cluster, a user can map one or more volumes to the host
cluster object.

As a result, the volume or set of volumes is mapped to each individual host object that is part
of the host cluster. Importantly, each of the volumes gets mapped with the same SCSI ID to
each host that is part of the host cluster with single command.

Although a host is part of a host cluster, volumes can still be assigned to an individual host in
a non-shared manner. A policy can be devised that can pre-assign a standard set of SCSI IDs
for volumes to be assigned to the host cluster, and another set of SCSI IDs to be used for
individual assignments to hosts.



Note: For example, SCSI IDs 0 - 100 can be used for individual host assignments, and
SCSI IDs greater than 100 can be used for the host cluster. By using such a policy, wanted
volumes are not shared and other volumes can be shared. For example, the boot volume of
each host can be kept private, while data and application volumes can be shared.

Creating a host cluster


This section describes how to create a host cluster. It is assumed that individual hosts were
created as described in 7.5.1, “Creating hosts” on page 381. Complete the
following steps:
1. From the menu on the left side, select Hosts → Host Clusters (see Figure 7-24).

Figure 7-24 Host clusters

2. Click Create Host Cluster to open the wizard that is shown in Figure 7-25.

Figure 7-25 Create Host Cluster

3. Enter a cluster name and, if applicable, choose the ownership group of which the hosts
are part. You can then select the individual hosts that you want in the cluster object
by pressing Ctrl or Shift and selecting them, as shown in Figure 7-26. Click Next after you
are done.

Figure 7-26 Create Host Cluster: Details



4. A summary is displayed in which you can confirm that you selected the correct hosts. Click
Make Host Cluster (see Figure 7-27).

Figure 7-27 Create Host Cluster: Summary

5. After the task completes, click Close to return to the Host Cluster view, where you can see
the cluster that you created (see Figure 7-28).

Figure 7-28 Host Cluster view

Note: The host cluster status depends on its member hosts. One offline or degraded host
sets the host cluster status as degraded.

From the Host Clusters view, many options are available to manage and configure the host
cluster. These options are accessed by selecting a cluster and clicking Actions (see
Figure 7-29).

Figure 7-29 Host Clusters Actions menu

From the Actions menu, you can perform the following tasks:
View Hosts View the hosts status within the cluster.
Add Hosts or Remove Hosts Add or remove hosts from the cluster.
Rename Host Cluster Rename the host cluster.
Modify Shared Volume Mappings Add or remove volumes that are mapped to all hosts in
the cluster while maintaining the same SCSI ID for all
hosts.
Modify Host Types Change the host type from generic to VVOLs, for
example.
Modify I/O Groups for Hosts Assign or restrict volume access to specific I/O groups.
Edit Throttle Restrict the megabytes per second (MBps) or
input/output operations per second (IOPS) bandwidth
for the host cluster (see the CLI sketch after this list).
View All Throttles Shows all throttling settings, and allows for changing,
deleting, or refining throttle settings.
Delete Host Cluster Delete a host cluster.
Customize Columns Modify which columns are displayed that show the
properties of the host cluster.
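The Edit Throttle action has a CLI counterpart in the mkthrottle command. The following
command is a hedged sketch that assumes a host cluster with ID 0 and that your code level
supports host cluster throttles; it limits the host cluster to 400 MBps and 10000 IOPS:

IBM_Storwize:ITSO-V7000:superuser>mkthrottle -type hostcluster -hostcluster 0 -bandwidth 400 -iops 10000

Run the lsthrottle command to display the throttles that are configured on the system.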

7.5.3 Advanced host administration


This section covers host administration, including host modification, host mappings, and
deleting hosts. Basic host creation by using FC and iSCSI connectivity is described in 7.5.1,
“Creating hosts” on page 381.

It is assumed that hosts were created in your IBM Spectrum Virtualize GUI, and that some
volumes are mapped to them.



The following sections describe the following functions that are available in the Hosts section
of the IBM Spectrum Virtualize GUI (see Figure 7-30):
򐂰 Hosts and host clusters
򐂰 Ports by Host
򐂰 Mappings
򐂰 Volumes by Host
򐂰 Volumes by Host Cluster

Figure 7-30 IBM Spectrum Virtualize Hosts menu

In the Hosts → Hosts view, three hosts were created and volumes are mapped to them in
our example. If needed, we can now modify these hosts by selecting a host and clicking
Actions, or by right-clicking the host to see the available tasks (see Figure 7-31).

Figure 7-31 Host actions

7.5.4 Hosts and host clusters
This section describes the various actions that you can take on hosts and host clusters.

Modifying volume mappings


To modify the volumes that are mapped to a specific host, complete the following steps:
1. From the Actions menu, select Modify Volume Mappings (see Figure 7-31 on
page 400). The window that is shown in Figure 7-32 opens. At the upper left, you can
confirm that the correct host is targeted. The list shows all volumes that are mapped to the
selected host. In our example, one volume with SCSI ID 0 is mapped to the host
ITSO-VMHOST-01.

Figure 7-32 Modifying the host volume mapping

Note: When you change volume mappings in a host cluster, the changes apply to the
shared mappings only. For example, when a volume mapping is added to the host
cluster, it becomes a shared mapping among all the hosts within the cluster. When a
mapping is removed, the volume is removed from the shared mappings for the host
cluster. However, you can select specific hosts that retain access to that volume as a
private mapping.



2. By selecting a listed volume, you can remove that volume mapping from the host. However, in
our case, we want to add a volume to our host. Continue by clicking Add Volume
Mapping (see Figure 7-33).

Figure 7-33 Volumes selection list

A window opens with a list of all volumes. You can easily identify whether
a volume that you want to map is already mapped to another host, as shown in Figure 7-34.

Figure 7-34 Volume mapped to another host warning

3. To map a volume, select it and click Next to map it to the host. The volume is assigned the
next available SCSI ID if you leave System Assign selected. However, by selecting Self
Assign, you can manually set SCSI IDs, as shown in Figure 7-35.

Figure 7-35 Modifying host volume mappings: Assigning a SCSI ID

If you select a SCSI ID that is already in use for the host, you cannot proceed. As shown in
Figure 7-35, we selected SCSI ID 0, but the right column shows that SCSI ID 0 is already
allocated. After changing to SCSI ID 1, we can click Next.



4. A summary window opens showing the new mapping details (see Figure 7-36). After
confirming that this result is what you planned, click Map Volumes, and then click Close.

Figure 7-36 Confirming the modified mappings

Note: The SCSI ID of a volume can be chosen only at the time that it is mapped to a host.
Changing it afterward is not possible without unmapping and remapping the volume.
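Because the SCSI ID cannot be changed after the mapping is created, it can be useful to
assign it explicitly when mapping from the CLI. The following command is a minimal sketch
with hypothetical host and volume names:

IBM_Storwize:ITSO-V7000:superuser>mkvdiskhostmap -host ITSO-VMHOST-01 -scsi 1 ITSO-VOL-01
Virtual Disk to Host map, id [1], successfully created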

Removing private mappings from a host


A host can access only those volumes on your system that are mapped to it. To remove a
host’s access to all of its volumes, regardless of how many volumes are mapped to it,
complete the following steps:
1. From the Hosts pane, select the host, and select Actions → Remove Private Mappings
to remove all access that the selected host has to its volumes (see Figure 7-37).

Figure 7-37 Unmapping all volumes action

2. You are prompted to confirm the number of mappings to be removed. To confirm your
action, enter the number of volumes to be removed and click Unmap (see Figure 7-38). In
this example, we removed three volume mappings.

Figure 7-38 Confirming the number of mappings to be removed

Unmapping: If you click Unmap, all access for this host to volumes that are controlled
by the system are removed. Ensure that you run the required procedures on your host
OS, such as unmounting the file system, taking the disk offline, or disabling the volume
group, before removing the volume mappings from your host object on the GUI.

3. The changes are applied to the system. Click Close. Figure 7-39 shows that the selected
host no longer has any host mappings.

Figure 7-39 All mappings for host ITSO-VMHOST-01 are removed
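A hedged CLI equivalent of this action is to list the host’s mappings and remove each one
individually. For example, assuming the host ITSO-VMHOST-01 and a mapped volume that is
named ITSO-VOL-01:

IBM_Storwize:ITSO-V7000:superuser>lshostvdiskmap ITSO-VMHOST-01
IBM_Storwize:ITSO-V7000:superuser>rmvdiskhostmap -host ITSO-VMHOST-01 ITSO-VOL-01

Run rmvdiskhostmap once for each volume name that lshostvdiskmap reports for the host.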



Duplicating and importing mappings
Volumes that are assigned to one host can be quickly and easily mapped to another host
object. You might make this assignment, for example, when replacing an aging host’s
hardware and you want to ensure that the replacement host can access the same set of
volumes as the old host.

You can accomplish this process in two ways: by duplicating the mappings on the existing
host object to the new host object, or by importing the host mappings to the new host. To
duplicate the mappings, complete the following steps:
1. To duplicate an existing host mapping, select the host that you want to duplicate and click
Actions → Duplicate Volume Mappings (see Figure 7-40). In our example, we duplicate
the volumes that are mapped to host ITSO-VMHOST-01 to the new host ITSO-VMHOST-02.

Figure 7-40 Duplicate Volume Mappings menu

2. The Duplicate Mappings window opens. Select a listed target host object to which you
want to map all the existing source host volumes and click Duplicate (see Figure 7-41).

Figure 7-41 Duplicate mappings window

Note: You can duplicate mappings only to a host that does not have volumes mapped.

3. After the task completes, we can verify the new mappings on the new host
object. From the Hosts menu, double-click the target host, or right-click the target host
and select Properties.



4. Click the Mapped Volumes tab and verify that the required volumes were mapped to the
new host (see Figure 7-42).

Figure 7-42 Host Details: New mappings on the target host

The same volume mappings can be accomplished by importing existing host mappings to
the new host. Complete the following steps:
1. Select the new host without any mapped volumes and click Actions → Import Volume
Mappings (see Figure 7-43).

Figure 7-43 Hosts Actions: Import Volume Mappings menu

2. The Import Mappings window opens. Select the source host from which you want to
import the volume mappings. As shown in Figure 7-44, we select the host
ITSO-VMHOST-01 and click Import.

Figure 7-44 Import volume mappings source host selection

3. After the task completes, verify that the mappings are as expected by going to the Hosts
menu (see Figure 7-31 on page 400), right-clicking the target host, and selecting
Properties. Then, click the Mapped Volumes tab and verify that the required volumes
were mapped to the new host (see Figure 7-42 on page 408).

Note: You can import mappings only from a source host that is in the same ownership
group as your target host. If they are not, the import fails with the message “The command
failed because the objects are in different ownership groups”.



Renaming a host
To rename a host, complete the following steps:
1. Select the host, right-click it, and select Rename (see Figure 7-45).

Figure 7-45 Renaming a host

2. Enter a new name and click Rename (see Figure 7-46). If you click Reset, the changes
are reset to the original host name.

Figure 7-46 Rename Host window

3. After the changes are applied to the system, click Close.

Removing a host
To remove a host object definition, complete the following steps:
1. From the Hosts pane, select the host and right-click it or click Actions → Remove (see
Figure 7-47).

Figure 7-47 Removing a host

2. Confirm that the window displays the correct list of hosts that you want to remove by
entering the number of hosts to remove and clicking Delete (see Figure 7-48).

Figure 7-48 Confirming the removal of the host



3. If the host that you are removing has volumes that are mapped to it, force the removal by
selecting the Remove the hosts even if volumes are mapped to them check box in the
lower part of the window. By selecting this option, the host is removed and it no longer can
access any volumes on this system.
4. After the task is completed, click Close.
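The CLI counterpart of this forced removal is the -force flag of the rmhost command. The
following command is a minimal sketch that assumes a host named ITSO-VMHOST-02 that still
has volumes mapped to it:

IBM_Storwize:ITSO-V7000:superuser>rmhost -force ITSO-VMHOST-02

As with the GUI option, the host immediately loses access to all of its volumes, so complete
the appropriate host-side cleanup first.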

Host properties
To view a host object’s properties, complete the following steps:
1. From the IBM Spectrum Virtualize GUI Hosts pane, select a host, and right-click it or click
Actions → Properties (see Figure 7-49).

Figure 7-49 Host properties

The Host Details window opens (see Figure 7-50).

Figure 7-50 Host properties overview

The Host Details window shows an overview of the selected host properties. It includes
three tabs: Overview, Mapped Volumes, and Port Definitions. The Overview tab is shown
in Figure 7-50 on page 412.
2. Select the Show Details slider at the bottom to see more information about the host.
3. Click Edit to change the host properties (see Figure 7-51).

Figure 7-51 Editing the host properties

In the window that is shown in Figure 7-51, you can modify the following properties:
– Host Name: Change the host name.
– Host Type: Change this setting if you are going to attach HP/UX, OpenVMS, or TPGS
hosts.
– I/O group: The host can access volumes that are mapped from the selected I/O groups.
– iSCSI CHAP Secret: Enter or change the iSCSI CHAP secret if this host is using iSCSI.
4. When you finish the changes, click Save to apply them. The editing window closes.



The Mapped Volumes tab shows a summary of which volumes are mapped with which
SCSI ID and unique identifier (UID) to this host (see Figure 7-52). The Show Details slider
does not show any more information for this list.

Figure 7-52 Mapped volumes tab

The Port Definitions tab shows the configured host ports of a host and provides status
information about them as shown in Figure 7-53 on page 415. This tab is also where you
can find the NQN of your NVMe host.

Figure 7-53 Port definitions

This window offers the option to Add or Delete Port on the host, as described in 7.5.5,
“Ports by Host” on page 415.
5. Click Close to close the Host Details window.

7.5.5 Ports by Host

To configure host ports, complete the following steps:


1. From the left menu, select Hosts → Ports by Host to open the associated pane (see
Figure 7-54).

Figure 7-54 Ports by Host



2. A window with a list of all the hosts opens. The function icons indicate whether the host is
FC, iSCSI, or serial-attached SCSI (SAS) attached. The port details of the selected host
are shown to the right. You can add a host object by clicking Add Host. If you click Host
Actions (see Figure 7-55), the tasks that are described in “Modifying volume mappings”
on page 401 can be selected.

Figure 7-55 Ports by Host actions

Adding a Fibre Channel or iSCSI host port


To add a host port, complete the following steps:
1. Select the host.
2. Click Add (see Figure 7-56) and select one of the following options:
a. Fibre Channel Port (see “Adding a Fibre Channel port” on page 417)
b. iSCSI Port (see “Adding an iSCSI host port” on page 420)

Figure 7-56 Add host ports

Adding a Fibre Channel port
To add an FC port, complete the following steps:
1. Click Fibre Channel Port (see Figure 7-56 on page 416). The Add Fibre Channel (SCSI)
Ports window opens (see Figure 7-57).

Figure 7-57 Add Fibre Channel (SCSI) Ports window

2. Click the drop-down menu to display a list of all discovered FC WWPNs. If the WWPN of
your host is not available in the menu, enter it manually or check the SAN zoning to ensure
that connectivity is configured. Then, rescan storage from the host.



3. Select the WWPN that you want to add and click Add Port to List (see Figure 7-58).

Figure 7-58 Adding a port to the list

This step can be repeated to add other ports to the host.


4. To add an offline port (if the WWPN of your host is not available in the drop-down menu),
manually enter the WWPN of the port into the Fibre Channel Ports field and click Add
Port to List.

The port is unverified (see Figure 7-59) because it is not logged on to the
IBM FlashSystem system. The first time that it logs on, its state is automatically changed
to online, and the mapping is applied to this port.

Figure 7-59 Unverified port

5. To remove a port from the list, click the red X next to the port. In this example, we delete
the manually added FC port so that only the detected port remains.
6. Click Add Ports to Host to apply the changes and click Close.



Adding an iSCSI host port
To add an iSCSI host port, complete the following steps:
1. Click iSCSI Port (see Figure 7-56 on page 416). The Add iSCSI Ports window opens (see
Figure 7-60).

Figure 7-60 Add iSCSI (SCSI) host ports

2. Enter the initiator name of your host (see Figure 7-61) and click Add Port to List.

Figure 7-61 Entering the initiator name

3. Click Add Ports to Host to apply the changes to the system, and then click Close.



Adding a NVMe host port
To add an NVMe host port, complete the following steps:
1. Click Add, and then click Fibre Channel (NVMe) Port. The Add Fibre Channel (NVMe)
Ports window opens, as shown in Figure 7-62.

Figure 7-62 Add Fibre Channel (NVMe) Ports window

2. Enter the NQN, and then click Add Ports to Host, as shown in Figure 7-63.

Figure 7-63 Enter the NVMe Qualified Name

Deleting a host port


To delete a host port, complete the following steps:
1. Highlight the host port and right-click it or click Delete Port (see Figure 7-64).

Figure 7-64 Deleting the host port



You can also press the Ctrl or Shift keys while selecting the ports to select multiple ports
for deletion (see Figure 7-65).

Figure 7-65 Deleting several host ports

2. Click Delete and confirm the number of host ports that you want to remove by entering
that number in the Verify field (see Figure 7-66).

Figure 7-66 Entering the number of host ports to delete

3. Click Delete to apply the changes, and then click Close.

Note: Deleting FC (including NVMe) and iSCSI ports is done the same way.

7.5.6 Mappings
To see an overview of the host mappings, click Hosts → Mappings, as shown in Figure 7-67.

Figure 7-67 Host mappings

A list of all volume mappings is displayed, as shown in Figure 7-68.

Figure 7-68 Mappings list

This window lists all hosts and volumes. This example shows that the host ITSO-VMHOST-01
has two mapped volumes, and their associated SCSI ID, Volume Name, and Volume Unique
Identifier. If you have more than one caching I/O group, you also see which volume is handled
by which I/O group.
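A similar overview is available from the CLI. Running the lshostvdiskmap command without
a host name lists the volume mappings for every host. The following output is a shortened,
illustrative sketch:

IBM_Storwize:ITSO-V7000:superuser>lshostvdiskmap
id name           SCSI_id vdisk_id vdisk_name vdisk_UID                        ...
0  ITSO-VMHOST-01 0       11       Linux1     60050768019C8514440000000000003C ...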



If you select one line and click Actions (see Figure 7-69), the following tasks are available:
򐂰 Unmap Volumes
򐂰 Properties (Host)
򐂰 Properties (Volume)

Figure 7-69 Host Mappings Actions menu

Unmapping a volume
This action removes the mappings for all selected entries. Select one or more lines (while
holding the Ctrl key), and then click Unmap Volumes in the Actions menu that is shown in
Figure 7-69. Confirm how many volumes are to be unmapped by entering that number in the
Verify field (see Figure 7-70), and then click Unmap.

Figure 7-70 Unmapping selected volumes

Properties (Host)
Selecting an entry and clicking Properties (Host), as shown in Figure 7-69, opens the host
Properties window. The contents of this window are described in “Host properties” on
page 412.

Properties (Volume)
Selecting an entry and clicking Properties (Volume), as shown in Figure 7-69 on page 426,
opens the Volume Properties view. The contents of this window are described in Chapter 6,
“Volumes” on page 277.

7.5.7 Volumes by Host


To see an overview of the host mappings, click Hosts → Volumes by Host, as shown in
Figure 7-71.

Figure 7-71 Volumes by Host

A window that shows a list of volumes mapped per host opens, as shown in Figure 7-72. This
window differs from the Host Mappings window because you see which volumes are mapped
by host rather than by type of mapping.

Figure 7-72 Volumes by Host actions menu

You can also filter by type of volume by selecting an option from the volumes menu. The
options are as follows:
򐂰 All Volumes
򐂰 Thin-Provisioned Volumes



򐂰 Compressed Volumes
򐂰 Deduplicated Volumes

Finally, you can create and map a new volume by using this window. Select Create Volumes
and the volume creation window opens, as shown in Figure 7-73.

Figure 7-73 Create Volumes window

After you choose the volume settings, you can click Create and Map, which opens the Create
Mapping window, as shown in Figure 7-74 on page 429.

Figure 7-74 Create Mapping window

Select the host to which you want to map the new volume. If you want to map the volume to
multiple hosts, hold down Shift or Ctrl and select the hosts, and then click Next, which
brings you to a confirmation window. Click Map Volumes to complete the process. You
can then see your new volume listed under the host to which you mapped it, as shown in
Figure 7-75.

Figure 7-75 New volume mapped to host



Note: If you configured object-based access control (OBAC) on your system, then when
creating and mapping a volume in this manner the following items must be in the same
ownership group:
򐂰 User that you are logged in as
򐂰 Pool to which you are creating the volume
򐂰 Host to which you are mapping the volume

If this is not the case, then when you attempt to map the new volume to the host as shown
in Figure 7-74 on page 429, the host does not appear in the list of hosts that are available
to create the mapping.

7.5.8 Volumes by Host Cluster


To see an overview of the host mappings, select Hosts → Volumes by Host Cluster, as
shown in Figure 7-76.

Figure 7-76 Volumes by Host Cluster

This window has the same options as the Volumes by Host option (see 7.5.7, “Volumes by
Host” on page 427) except it is for host clusters rather than single hosts. If you have no host
clusters that are defined in your system, no host objects are displayed.

7.6 Performing host operations by using the command-line interface
This section describes some of the host-related actions that can be taken within the system
by using the command-line interface (CLI).

7.6.1 Creating a host by using the CLI


This section describes how to create FC and iSCSI hosts by using the IBM Spectrum
Virtualize CLI. It is assumed that hosts are prepared for attachment as noted in the guidelines
in the “Host Attachment” section of IBM Knowledge Center.

Creating Fibre Channel hosts
To create an FC host, complete the following steps:
1. Rescan the SAN on the IBM FlashSystem system by running the detectmdisk command
(see Example 7-14).

Example 7-14 Rescanning the SAN


IBM_Storwize:ITSO-V7000:superuser>detectmdisk

Note: The detectmdisk command does not return any response.

If zoning was implemented correctly, any new WWPNs are discovered by the
IBM FlashSystem system after running the detectmdisk command.
2. List the candidate WWPNs and identify the WWPNs belonging to the new host, as shown
in Example 7-15.

Example 7-15 Available WWPNs


IBM_Storwize:ITSO-V7000:superuser>lsfcportcandidate
fc_WWPN
2100000E1E09E3E9
2100000E1E30E5E8
2100000E1E30E60F
2100000E1EC2E5A2
2100000E1E30E597
2100000E1E30E5EC

3. Run the mkhost command with the required parameters, as shown in Example 7-16.

Example 7-16 Host creation


IBM_Storwize:ITSO-V7000:superuser>mkhost -name ITSO-VMHOST-03 -fcwwpn
2100000E1E30E597:2100000E1E30E5EC
Host, id [3], successfully created
IBM_Storwize:ITSO-V7000:superuser>

Creating iSCSI hosts


Before you create an iSCSI host in an IBM FlashSystem system, you must discover the IQN
address of the host. To find the IQN of the host, see your host OS-specific documentation.

To create a host, complete the following steps:


1. Create the iSCSI host by running the mkhost command (see Example 7-17).

Example 7-17 Creating an iSCSI host by running the mkhost command


IBM_Storwize:ITSO-V7000:superuser>mkhost -iscsiname
[Link]:e6ff477b58 -name RHEL-Host-06
Host, id [4], successfully created
IBM_Storwize:ITSO-V7000:superuser>



2. The iSCSI host can be verified by running the lshost command, as shown in
Example 7-18.

Example 7-18 Verifying the iSCSI host by running the lshost command
IBM_Storwize:ITSO-V7000:superuser>lshost 4
id 4
name RHEL-Host-06
port_count 1
type generic
mask 1111111111111111111111111111111111111111111111111111111111111111
iogrp_count 4
status offline
site_id
site_name
host_cluster_id
host_cluster_name
protocol scsi
status_policy redundant
status_site all
iscsi_name [Link]:e6ff477b58
node_logged_in_count 0
state offline
owner_id 0
IBM_Storwize:ITSO-V7000:superuser>

Note: When the host is initially configured, the default authentication method is set to no
authentication, and no CHAP secret is set. To set a CHAP secret for authenticating the
iSCSI host with the IBM FlashSystem system, run the chhost command with the
chapsecret parameter. If you must display a CHAP secret for a defined server, run the
lsiscsiauth command. The lsiscsiauth command lists the CHAP secret that is
configured for authenticating an entity to the IBM FlashSystem system.

FC hosts and iSCSI hosts are handled in the same way operationally after they are
created.
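For example, to set a CHAP secret and then verify it, the following commands are a minimal
sketch that assumes the iSCSI host RHEL-Host-06 that was created above and a hypothetical
secret value; the lsiscsiauth output is abbreviated:

IBM_Storwize:ITSO-V7000:superuser>chhost -chapsecret passw0rd RHEL-Host-06
IBM_Storwize:ITSO-V7000:superuser>lsiscsiauth
type id name         iscsi_auth_method iscsi_chap_secret
host 4  RHEL-Host-06 chap              passw0rd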

Creating NVMe hosts


Before you create an NVMe host, you must know the NQN address of the host. See your host
OS-specific documentation to find the NQN of the host.

Create a host by completing the following steps:


1. Create the NVMe host by running the mkhost command, as shown in Example 7-19.

Example 7-19 The mkhost command


IBM_Storwize:ITSO-V7000:superuser>svctask mkhost -name NVMe-Host-01 -nqn
[Link]:nvme:nvm-nvmehost01-edf223876 -protocol nvme -type
generic
Host, id [6], successfully created
IBM_Storwize:ITSO-V7000:superuser>

2. The NVMe host can be verified by running the lshost command, as shown in
Example 7-20 on page 433.

Example 7-20 The lshost command
IBM_Storwize:ITSO-V7000:superuser>lshost 6
id 6
name NVMe-Host-01
port_count 1
type generic
mask 1111111111111111111111111111111111111111111111111111111111111111
iogrp_count 4
status offline
site_id
site_name
host_cluster_id
host_cluster_name
protocol nvme
status_policy redundant
status_site all
nqn [Link]:nvme:nvm-nvmehost01-edf223876
node_logged_in_count 0
state offline
owner_id 0
IBM_Storwize:ITSO-V7000:superuser>

Note: If you have OBAC set up, you can use the -ownershipgroup parameter when
creating a host to add the new host to a pre-configured ownership group. You can use
either the ownership group name or ID. Here is an example command:
svctask mkhost -name NVMe-Host-01 -nqn
[Link]:nvme:nvm-nvmehost01-edf223876 -protocol nvme -type
generic -ownershipgroup ownershipgroup0

7.6.2 Advanced host administration by using the CLI


This section describes the following advanced host operations, which can be carried out by
using the CLI:
򐂰 Mapping a volume to a host
򐂰 Mapping a volume already mapped to a different host
򐂰 Unmapping a volume from a host
򐂰 Renaming a host
򐂰 Host properties

Mapping a volume to a host


To map a volume, complete the following steps:
1. To map a volume to a host, run the mkvdiskhostmap command, as shown in Example 7-21.

Example 7-21 Mapping a volume


IBM_Storwize:ITSO-V7000:superuser>mkvdiskhostmap -host RHEL_HOST -scsi 0
RHEL_VOLUME
Virtual Disk to Host map, id [0], successfully created



2. The volume mapping can then be checked by running the lshostvdiskmap command
against that host, as shown in Example 7-22.

Example 7-22 Checking the mapped volume


IBM_Storwize:ITSO-V7000:superuser>lshostvdiskmap RHEL_HOST
id name SCSI_id vdisk_id vdisk_name vdisk_UID
IO_group_id IO_group_name mapping_type host_cluster_id host_cluster_name
7 RHEL_HOST 0 109 RHEL_VOLUME 600507680C81825B0000000000000154 0
io_grp0 private

Mapping a volume that is already mapped to a different host


To map a volume that is already mapped to one host to a second host, complete the following steps:
1. Run the mkvdiskhostmap command with the -force flag, as shown in Example 7-23.

Example 7-23 Mapping the same volume to a second host


IBM_Storwize:ITSO-V7000:superuser>svctask mkvdiskhostmap -force -host
RHEL-Host-06 -scsi 0 Linux1
Virtual Disk to Host map, id [0], successfully created
IBM_Storwize:ITSO-V7000:superuser>

Note: The volume Linux1 is mapped to both hosts by using the same SCSI
ID. Typically, that is a requirement for most host-based clustering software, such as
Microsoft Cluster Service, IBM PowerHA, and VMware ESX clustering.

2. The volume Linux1 is now mapped to two hosts (RHEL-Host-05 and RHEL-Host-06), which
can be verified by running the lsvdiskhostmap command, as shown in Example 7-24.

Example 7-24 Ensuring that the same volume is mapped to multiple hosts
IBM_Storwize:ITSO-V7000:superuser>lsvdiskhostmap Linux1
id name SCSI_id host_id host_name vdisk_UID
IO_group_id IO_group_name mapping_type host_cluster_id host_cluster_name
protocol
11 Linux1 0 3 RHEL-Host-05 60050768019C8514440000000000003C 0
io_grp0 private scsi
11 Linux1 0 4 RHEL-Host-06 60050768019C8514440000000000003C 0
io_grp0 private scsi
IBM_Storwize:ITSO-V7000:superuser>

Unmapping a volume from a host


To unmap a volume from the host, run the rmvdiskhostmap command, as shown in
Example 7-25.

Example 7-25 Unmapping a volume from a host


IBM_Storwize:ITSO-V7000:superuser>rmvdiskhostmap -host RHEL-Host-06 Linux1
IBM_Storwize:ITSO-V7000:superuser>

Important: Before unmapping a volume from a host on an IBM FlashSystem system,


ensure that the host side action is completed on that volume by using the respective host
OS platform commands, such as unmounting the file system, or removing the volume or
volume group. Otherwise, it can result in data corruption.

Renaming a host
To rename a host definition, run the chhost -name command, as shown in Example 7-26. In
this example, the host RHEL-Host-06 is renamed to RHEL-Host-07.

Example 7-26 Renaming a host


IBM_Storwize:ITSO-V7000:superuser>chhost -name RHEL-Host-07 RHEL-Host-06
IBM_Storwize:ITSO-V7000:superuser>

Removing a host
To remove a host from the IBM FlashSystem system, run the rmhost command, as shown in
Example 7-27.

Example 7-27 Removing a host


IBM_Storwize:ITSO-V7000:superuser>rmhost RHEL-Host-07
IBM_Storwize:ITSO-V7000:superuser>

Note: Before removing a host from an IBM FlashSystem system, ensure that all of the
volumes are unmapped from that host, as shown in Example 7-25.

Host properties
To get more information about a host, run the lshost command with the hostname or host id
as a parameter, as shown in Example 7-28.

Example 7-28 Host details


IBM_Storwize:ITSO-V7000:superuser>lshost ITSO-VMHOST-01
id 0
name ITSO-VMHOST-01
port_count 2
type generic
mask 1111111111111111111111111111111111111111111111111111111111111111
iogrp_count 4
status offline
site_id
site_name
host_cluster_id 0
host_cluster_name ITSO-ESX-Cluster-01
protocol scsi
status_policy redundant
status_site all
WWPN 2100000E1E30E597
node_logged_in_count 0
state offline
WWPN 2100000E1E30E5E8
node_logged_in_count 0
state offline
owner_id 0
IBM_Storwize:ITSO-V7000:superuser>



Note: In Version [Link], the new status_policy property was added to each host
because, in some situations, the status field showed misleading host statuses. For
example, it might show online when the host had no access to volumes (due
to port masking or NPIV mode). The new status_policy property has two potential
values:
򐂰 Complete: The original policy, which uses the existing algorithm.
Existing hosts on systems that are upgraded to a new code level have this policy set.
򐂰 Redundant: This policy changes the meaning of Online and Degraded in the status
property:
– Online indicates redundant connectivity, that is, enough host ports are logged in to
enough nodes so that the removal of a single node or a single host port still enables
that host to access all of its volumes.
– Degraded indicates non-redundant connectivity, that is, a state in which a single point
of failure (SPOF) prevents a host from accessing at least some of its volumes.

These options can be changed only by running the chhost command. When the host is
created by running mkhost, the default policy of redundant is set.
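As a hedged sketch, the policy might be changed with a command of the following form for a
host named ITSO-VMHOST-01; the -statuspolicy parameter name is an assumption that is
based on the property name, so verify the exact syntax in the chhost reference for your code
level:

IBM_Storwize:ITSO-V7000:superuser>chhost -statuspolicy complete ITSO-VMHOST-01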

7.6.3 Adding and deleting a host port by using the CLI


This section describes adding and deleting a host port to and from your IBM FlashSystem
system.

Adding ports to a defined host


To add ports to a defined host, complete the following steps:
1. Depending on how your host is connected, do one of the following tasks:
– If an HBA is added, or if a network interface controller (NIC) is added to a server that is
defined within the IBM FlashSystem system, run the addhostport command to add the
new port definitions to the host configuration.
– If the host is connected through SAN with FC, and if the WWPN is zoned to the system,
run the lsfcportcandidate command to compare it with the information that is
available from the server administrator, as shown in Example 7-29.

Example 7-29 Listing the newly available WWPN


IBM_Storwize:ITSO-V7000:superuser>lsfcportcandidate
fc_WWPN
2100000E1E09E3E9
2100000E1E30E5E8
2100000E1E30E60F
2100000E1EC2E5A2
IBM_Storwize:ITSO-V7000:superuser>

2. Use host or SAN switch utilities to verify that the listed WWPN matches the WWPN of the
new host port. If it matches, run the addhostport command to add the port to
the host, as shown in Example 7-30 on page 437.

Example 7-30 Adding the newly discovered WWPN to the host definition
IBM_Storwize:ITSO-V7000:superuser>addhostport -hbawwpn
2100000E1E09E3E9:2100000E1E30E5E8 ITSO-VMHOST-01
IBM_Storwize:ITSO-V7000:superuser>

This command adds the WWPNs 2100000E1E09E3E9 and 2100000E1E30E5E8 to the
ITSO-VMHOST-01 host.
3. If the new HBA is not connected or zoned, the lsfcportcandidate command does not
display your WWPN. In this case, you can manually enter the WWPN of your HBA or
HBAs and use the -force flag to add the port, as shown in Example 7-31.

Example 7-31 Adding a WWPN to the host definition by using the -force option
IBM_Storwize:ITSO-V7000:superuser>addhostport -hbawwpn 2100000000000001 -force
ITSO-VMHOST-01
IBM_Storwize:ITSO-V7000:superuser>

This command forces the addition of the WWPN 2100000000000001 to the host
ITSO-VMHOST-01.

Note: WWPNs are not case-sensitive within the CLI.

4. Verify the host port count again by running the lshost command. The host
ITSO-VMHOST-01 has an updated port count of 3, as shown in Example 7-32.

Example 7-32 Host with updated port count


IBM_Storwize:ITSO-V7000:superuser>lshost
id name port_count iogrp_count status site_id site_name
host_cluster_id host_cluster_name protocol
0 ITSO-VMHOST-02 0 2 offline
scsi
1 ITSO-VMHOST-01 3 2 degraded
scsi
2 RHEL-Host-01 1 2 offline
scsi
3 ITSO-VMHOST-03 2 2 online
scsi
IBM_Storwize:ITSO-V7000:superuser>

If the host uses iSCSI as a connection method, the new iSCSI IQN ID is used to add the
port. Unlike FC-attached hosts, with iSCSI, available candidate ports cannot be checked.
5. After getting the other iSCSI IQN, run the addhostport command, as shown in
Example 7-33.

Example 7-33 Adding an iSCSI port to the defined host


IBM_Storwize:ITSO-V7000:superuser>addhostport -iscsiname
[Link]:e6ddffaab567 RHEL-Host-05
IBM_Storwize:ITSO-V7000:superuser>

If the host uses NVMe as a connection method, the new NVMe NQN ID is used to add the
port. Unlike SCSI FC-attached hosts, with NVMe, available candidate ports cannot be
checked.



6. After getting the other NVMe NQN, run the addhostport command, as shown in
Example 7-34.

Example 7-34 The addhostport command


IBM_Storwize:ITSO-V7000:superuser>addhostport -nqn
[Link]:875adad3345 RHEL-Host-08
IBM_Storwize:ITSO-V7000:superuser>

Deleting ports from a defined host


If a mistake is made while adding a port, or if an HBA is removed from a server that is defined
within the IBM FlashSystem system, run the rmhostport command to remove WWPN
definitions from a host.

Before removing the WWPN, ensure that it is the correct WWPN by completing the following
steps:
1. Run the lshost command, as shown in Example 7-35.

Example 7-35 Running the lshost command to check the WWPNs


IBM_2145:ITSO-SV1:superuser>lshost ITSO-VMHOST-01
id 0
name ITSO-VMHOST-01
port_count 2
type generic
mask 1111111111111111111111111111111111111111111111111111111111111111
iogrp_count 4
status online
site_id
site_name
host_cluster_id 0
host_cluster_name ITSO-ESX-Cluster-01
protocol scsi
status_policy redundant
status_site all
WWPN 2100000E1E30E597
node_logged_in_count 2
state online
WWPN 2100000E1E30E5E8
node_logged_in_count 2
state online
owner_id 0
IBM_2145:ITSO-SV1:superuser>

2. When you discover the WWPN or iSCSI IQN that must be deleted, run the rmhostport
command to delete the host port, as shown in Example 7-36.

Example 7-36 Running the rmhostport command to remove a WWPN


IBM_Storwize:ITSO-V7000:superuser>rmhostport -fcwwpn 2100000E1E30E597
ITSO-VMHOST-01
IBM_Storwize:ITSO-V7000:superuser>

3. To remove the iSCSI IQN, run the rmhostport command again, as shown in Example 7-37
on page 439.

Example 7-37 Removing the iSCSI port from the host
IBM_Storwize:ITSO-V7000:superuser>rmhostport -iscsiname
[Link]:e6ddffaab567 RHEL-Host-05
IBM_Storwize:ITSO-V7000:superuser>

4. To remove the NVMe NQN, run the rmhostport command again, as shown in
Example 7-38.

Example 7-38 Removing the NQN port from the host


IBM_Storwize:ITSO-V7000:superuser>rmhostport -nqn
[Link]:875adad3345 RHEL-Host-08
IBM_Storwize:ITSO-V7000:superuser>

Note: Multiple ports can be removed at one time by using the colon (:) separator
between the port names, as shown in the following example:
rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola

7.6.4 Host cluster operations


This section describes the following host cluster operations that can be performed by using
the CLI:
򐂰 Creating a host cluster (mkhostcluster).
򐂰 Adding a member to the host cluster (addhostclustermember).
򐂰 Listing a host cluster (lshostcluster).
򐂰 Listing a host cluster member (lshostclustermember).
򐂰 Assigning a volume to the host cluster (mkvolumehostclustermap).
򐂰 Listing a host cluster for mapped volumes (lshostclustervolumemap).
򐂰 Unmapping a volume from the host cluster (rmvolumehostclustermap).
򐂰 Removing a host cluster member (rmhostclustermember).
򐂰 Removing the host cluster (rmhostcluster).

Creating a host cluster


To create a host cluster, run the mkhostcluster command, as shown in Example 7-39.

Example 7-39 Creating a host cluster by running mkhostcluster


IBM_Storwize:ITSO-V7000:superuser>mkhostcluster -name ITSO-ESX-Cluster-01
Host cluster, id [0], successfully created.
IBM_Storwize:ITSO-V7000:superuser>

Note: While creating the host cluster, if you want it to inherit the volumes that are mapped
to a particular host, use the -seedfromhost flag option. Any volume mapping that does not
need to be shared can be kept private by using the -ignoreseedvolume flag option.



Adding a host to a host cluster
After creating a host cluster, a host or a list of hosts can be added by running the
addhostclustermember command, as shown in Example 7-40.

Example 7-40 Adding a host or hosts to a host cluster


IBM_Storwize:ITSO-V7000:superuser>addhostclustermember -host ITSO-VMHOST-01:ITSO-VMHOST-02
ITSO-ESX-Cluster-01
IBM_Storwize:ITSO-V7000:superuser>

In Example 7-40, the hosts ITSO-VMHOST-01 and ITSO-VMHOST-02 were added as part of host
cluster ITSO-ESX-Cluster-01.

Listing the host cluster member


To list the host members that are part of a particular host cluster, run the
lshostclustermember command, as shown in Example 7-41.

Example 7-41 Listing host cluster members by running the lshostclustermember command
IBM_Storwize:ITSO-V7000:superuser>lshostclustermember ITSO-ESX-Cluster-01
host_id host_name status type site_id site_name
0 ITSO-VMHOST-01 offline generic
4 ITSO-VMHOST-02 offline generic
IBM_Storwize:ITSO-V7000:superuser>

Mapping a volume to a host cluster


To map a volume to a host cluster so that it automatically is mapped to member hosts, run the
mkvolumehostclustermap command, as shown in Example 7-42.

Example 7-42 Mapping a volume to a host cluster


IBM_Storwize:ITSO-V7000:superuser>mkvolumehostclustermap -hostcluster ITSO-ESX-Cluster-01
VMware1
Volume to Host Cluster map, id [0], successfully created
IBM_Storwize:ITSO-V7000:superuser>

Note: When a volume is mapped to a host cluster, that volume is mapped to all of the
members of the host cluster with the same SCSI_ID.

Listing the volumes that are mapped to a host cluster


To list the volumes that are mapped to a host cluster, run the lshostclustervolumemap
command, as shown in Example 7-43.

Example 7-43 Listing volumes that are mapped to a host cluster by running lshostclustervolumemap
IBM_Storwize:ITSO-V7000:superuser>lshostclustervolumemap ITSO-ESX-Cluster-01
id name SCSI_id volume_id volume_name volume_UID IO_group_id IO_group_name protocol
0 ITSO-ESX-Cluster-01 0 8 VMware1 60050768019C85144400000000000039 0 io_grp0 scsi
0 ITSO-ESX-Cluster-01 1 9 VMware2 60050768019C8514440000000000003A 0 io_grp0 scsi
0 ITSO-ESX-Cluster-01 2 10 VMware3 60050768019C8514440000000000003B 0 io_grp0 scsi
IBM_Storwize:ITSO-V7000:superuser>

Note: You can run the lshostvdiskmap command against each host that is part of a host
cluster to ensure that the mapping type for the shared volume is shared, and that the
non-shared volume is private.

Removing a volume mapping from a host cluster
To remove a volume mapping to a host cluster, run the rmvolumehostclustermap command,
as shown in Example 7-44.

Example 7-44 Removing a volume mapping


IBM_Storwize:ITSO-V7000:superuser>rmvolumehostclustermap -hostcluster ITSO-ESX-Cluster-01
VMware3
IBM_Storwize:ITSO-V7000:superuser>

In Example 7-44, volume VMware3 is unmapped from the host cluster ITSO-ESX-Cluster-01.
The current volume mapping can be checked to ensure that it is unmapped, as shown in
Example 7-43 on page 440.

Note: To specify the host or hosts that acquire private mappings from the volume that is
being removed from the host cluster, use the -makeprivate flag.

Removing a host cluster member


To remove a host cluster member, run the rmhostclustermember command, as shown in
Example 7-45.

Example 7-45 Removing a host cluster member


IBM_2145:ITSO-SV1:superuser>rmhostclustermember -host ITSO-VMHOST-02 -removemappings
ITSO-ESX-Cluster-01
IBM_2145:ITSO-SV1:superuser>

In Example 7-45, the host ITSO-VMHOST-02 was removed as a member from the host cluster
ITSO-ESX-Cluster-01, along with the associated volume mappings because the
-removemappings flag was specified.

Removing a host cluster


To remove a host cluster, run the rmhostcluster command, as shown in Example 7-46.

Example 7-46 Removing a host cluster


IBM_Storwize:ITSO-V7000:superuser>rmhostcluster -removemappings ITSO-ESX-Cluster-01
IBM_Storwize:ITSO-V7000:superuser>

Using the -removemappings flag also causes the system to remove any host mappings to
volumes that are shared. The mappings are deleted before the host cluster is deleted.

Note: To keep the volumes mapped to the host objects even after the host cluster is
deleted, use the -keepmappings flag instead of -removemappings for the rmhostcluster
command. When -keepmappings is specified, the host cluster is deleted, but the volume
mapping to the host becomes private instead of shared.

Adding a host or host cluster to an ownership group


To add a host or a host cluster to an ownershipgroup, run the chhost or chhostcluster
command with the -ownershipgroup parameter, as shown in Example 7-47.

Example 7-47 Adding a host cluster to an ownershipgroup


IBM_Storwize:ITSO-V7000:JackUser>chhostcluster -ownershipgroup 1 0
IBM_Storwize:ITSO-V7000:JackUser>



Note: You must specify the ID of the ownership group to which you want to add the host,
and then specify the ID of the host or host cluster. So, the command in Example 7-47 adds
host cluster ID 0 to ownershipgroup ID 1.

Removing a host or host cluster from an ownership group


To remove a host or a host cluster from an ownershipgroup, run the chhost or chhostcluster
command with the -noownershipgroup parameter, as shown in Example 7-48.

Example 7-48 Removing a host cluster from an ownership group


IBM_Storwize:ITSO-V7000:JackUser>chhostcluster -noownershipgroup 1
IBM_Storwize:ITSO-V7000:JackUser>

This command removes host cluster 1 from the ownershipgroup that is assigned to it.


Chapter 8. Storage migration


This chapter describes the steps that are involved in migrating data from an external storage
controller to an IBM FlashSystem system by using the storage migration wizard. Migrating
data from other storage systems to the system consolidates storage. It also enables
IBM Spectrum Virtualize features, such as Easy Tier, thin provisioning, compression,
encryption, storage replication, and the GUI, which can be used across all volumes.

Storage migration uses the volume mirroring function to enable reads and writes during the
migration, which minimizes disruption and downtime. After the migration completes, the
existing controller can be retired.

The system supports migration through Fibre Channel (FC) and internet Small Computer
Systems Interface (iSCSI) connections.

This chapter includes the following topics:


򐂰 8.1, “Storage migration overview” on page 444
򐂰 8.2, “Storage migration wizard” on page 446

Note: This chapter does not cover migration outside of the storage migration wizard. To
migrate data outside of the wizard, you must use the Import option in the GUI. This
chapter also does not cover virtualization of external storage. For more information about
these topics, see Chapter 5, “Storage pools” on page 221.



8.1 Storage migration overview
To migrate data from a storage controller to the system, it is necessary to use the built-in
external virtualization capability. This capability places external connected logical units (LUs)
under the control of the system, which acts as a proxy while hosts continue to access them.
The volumes are then fully virtualized in the system.

Attention: The system does not require a license for its own control and expansion
enclosures. However, a license is required for any external systems that are being
virtualized, either based on storage capacity units (SCU) or based on the number of
enclosures. Data can be migrated from storage systems to your system by using the
external virtualization function within 45 days of purchase of the system without purchase
of a license. After 45 days, any ongoing use of the external virtualization function requires
a license.

Set the license temporarily during the migration process to prevent messages that indicate
that you are in violation of the license agreement from being sent. When the migration is
complete, or after 45 days, reset the license to its original limit or purchase a new license.

Consider the following points about the storage migration process:


򐂰 Typically, storage controllers divide storage into many Small Computer System Interface
(SCSI) LUs that are presented to hosts.
򐂰 I/O to the LUs must be stopped and changes made to the mapping of the external storage
controller LUs and to the fabric or iSCSI configuration so that the original LUs are
presented directly to the system and not to the hosts anymore. The system discovers the
external LUs as unmanaged managed disks (MDisks).
򐂰 The unmanaged MDisks are imported to the system as image mode volumes and placed
into a temporary storage pool. This storage pool is now a logical container for the LUs.
򐂰 Each MDisk has a one-to-one mapping with an image mode volume. From a data
perspective, the image mode volumes represent the LUs exactly as they were before the
import operation. The image mode volumes are on the same physical drives of the
external storage controller and the data remains unchanged. The system is presenting
active images of the LUs and acting as a proxy.
򐂰 The multipath device driver of the external storage system must be removed from the
hosts, and the hosts must then be configured for attachment to this system. The hosts are
defined with worldwide port names (WWPNs) or iSCSI Qualified Names (IQNs), and the
volumes are mapped to the hosts. After the volumes are mapped, the hosts discover the
system’s volumes through a host rescan or restart operation.
򐂰 IBM Spectrum Virtualize volume mirroring operations are initiated. The image mode
volumes are mirrored to generic volumes. Volume mirroring is an online migration task,
which means a host can still access and use the volumes during the mirror
synchronization process.
򐂰 After the mirror operations are complete, the image mode volumes are removed. The
external storage system LUs are now migrated and the now redundant storage can be
decommissioned or reused elsewhere. (A CLI sketch of this sequence follows this list.)
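In CLI terms, the process that is described above corresponds roughly to the following
sequence. This sequence is a hedged sketch with hypothetical object names (mdisk8,
MigrationPool, Pool0, migration_vol, and ITSO-HOST-01); the storage migration wizard that is
described in 8.2, “Storage migration wizard” on page 446 runs the equivalent steps for you:

IBM_Storwize:ITSO-V7000:superuser>detectmdisk
IBM_Storwize:ITSO-V7000:superuser>lsmdisk -filtervalue mode=unmanaged
IBM_Storwize:ITSO-V7000:superuser>mkvdisk -mdiskgrp MigrationPool -iogrp 0 -mdisk mdisk8 -vtype image -name migration_vol
IBM_Storwize:ITSO-V7000:superuser>mkvdiskhostmap -host ITSO-HOST-01 migration_vol
IBM_Storwize:ITSO-V7000:superuser>addvdiskcopy -mdiskgrp Pool0 migration_vol
IBM_Storwize:ITSO-V7000:superuser>lsvdisksyncprogress
IBM_Storwize:ITSO-V7000:superuser>rmvdiskcopy -copy 0 migration_vol

The addvdiskcopy command adds a mirrored copy of the image mode volume in the target
pool, lsvdisksyncprogress reports the synchronization progress, and rmvdiskcopy removes
the original image mode copy (copy 0 in this sketch) after the mirror is synchronized.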

Important: If you are migrating volumes from another IBM Storwize or IBM FlashSystem
family product, the target system must be configured in the replication layer and the source
system must be configured in the storage layer. Otherwise, the source system does not
discover the target as a host, and the target does not discover the source as a back-end
controller.

The default layer setting for Storwize and IBM FlashSystem family systems is storage. For
more information about layers and how to change them, see Chapter 5, “Storage pools” on
page 221.

8.1.1 Interoperability and compatibility


Interoperability is an important consideration when a new storage system is set up in an
environment that contains a storage infrastructure. Before attaching any external storage
systems to the system, see the IBM System Storage Interoperation Center (SSIC).

At the SSIC site, select IBM System Storage Enterprise Flash for IBM FlashSystem 9100
or IBM System Storage Midrange Disk for other hardware platforms like the Storwize family,
and then select the appropriate Storage Controller Support entry for your system as the
Storage Model. You can refine your search by selecting the external storage controller that
you want to use from the Storage Controller menu.

The matrix results indicate the external storage that you want to attach to the system, such as
minimum firmware level or support for disks greater than 2 TB.

8.1.2 Prerequisites
Before the storage migration wizard can be started, the external storage controller must be
visible to the system. You also must confirm that the restrictions, limits, and prerequisites are
met.

Administrators can migrate data from the external storage system to the system by using
iSCSI or Fibre Channel (FC) connections.

Common prerequisites
If you have VMware Elastic Sky X (ESX) server hosts, you must change the settings on the
VMware host so that copies of the volumes can be recognized by the system after the
migration completes. To ensure that volume copies can be recognized by the system for
VMware ESX hosts, you must complete one of the following actions:
򐂰 Enable the EnableResignature setting.
򐂰 Disable the DisallowSnapshotLUN setting.

To learn more about these settings, see the documentation for the VMware ESX host.

Note: Test the setting changes on a non-production server. The logical unit number (LUN)
has a different unique identifier (UID) after it is imported. It resembles a mirrored volume to
the VMware server.



Prerequisites for a Fibre Channel connection
The following prerequisites for FC connections must be met:
򐂰 Make sure an FC host interface card (HIC) is installed in the node canisters.
򐂰 Cable this system into the storage area network (SAN) of the external storage that you
want to migrate. Ensure that your system is cabled into the same SAN as the external
storage controller that you are migrating.
򐂰 If you are using FC, connect the FC cables to the FC ports in both canisters of your
system, and then to the FC network.
For more information and considerations, see Chapter 2, “Planning” on page 77.
Alternatively, directly attach the external storage controller to the nodes instead of using a
switched fabric.

Prerequisites for iSCSI connections


The following prerequisites for iSCSI connections must be met:
򐂰 Cable this system to the external storage system with a redundant switched fabric.
Migrating iSCSI external storage requires that the system and the storage system are
connected through an Ethernet switch. Symmetric ports on all nodes of the system must
be connected to the same switch and must be configured on the same subnet.
򐂰 In addition, modify the Ethernet port attributes to enable external storage connectivity on
the Ethernet port. To modify the Ethernet port for
external storage, click Network → Ethernet Ports and right-click a configured port. Select
Modify Storage Ports to enable the port for external storage connections (a CLI sketch of
this step follows this list).
򐂰 Cable the Ethernet ports on the storage system to fabric in the same way as the system
and ensure that they are configured in the same subnet. Optionally, you can use a virtual
local area network (VLAN) to define network traffic for the system ports.
򐂰 For full redundancy, configure two Ethernet fabrics with separate Ethernet switches. If the
source system nodes and the external storage system both have more than two Ethernet
ports, extra redundant iSCSI connection can be established for increased throughput.
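As a hedged CLI sketch of the port enablement step in this list, the -storage parameter of the
cfgportip command is assumed to be available at your code level for enabling external
storage connectivity on an already configured Ethernet port (here, port 2 of node1); verify the
exact syntax in the cfgportip reference:

IBM_Storwize:ITSO-V7000:superuser>cfgportip -node node1 -storage yes 2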

8.2 Storage migration wizard


The storage migration wizard simplifies the migration task. The wizard features easy-to-follow
windows that guide users through the entire process. The wizard shows you which
commands are being run so that you can see exactly what is being performed throughout the
process.

Attention: The risk of losing data when using the storage migration wizard correctly is low.
However, it is prudent to avoid potential data loss by creating a backup of all the data that is
stored on the hosts, the storage controllers, and the system before the wizard is used.

Complete the following steps to complete the migration by using the storage migration wizard:
1. Select Pools → System Migration, as shown in Figure 8-1 on page 447. The System
Migration pane provides access to the storage migration wizard and displays information
about the migration progress.

Figure 8-1 Navigating to Storage Migration

2. Click Start New Migration to begin the storage migration wizard, as shown in Figure 8-2.

Figure 8-2 Starting a migration

Note: Starting a new migration adds the volume to be migrated to the list that is shown
in Figure 8-2. After a volume is migrated, it remains in the list until you finalize the
migration.

3. If both FC and iSCSI external systems are detected, a dialog box opens and prompts you
about which protocol should be used. Select the type of attachment between the system
and the external controller from which you want to migrate volumes and click Next. If only
one type of attachment is detected, this dialog box does not open.
If the external storage system is not detected, the warning message that is shown in
Figure 8-3 is displayed when you attempt to start the migration wizard. Click Close and
correct the problem before you try to start the migration wizard again.

Figure 8-3 Error message if no external storage is detected



4. When the wizard starts, you are prompted to verify the restrictions and prerequisites that
are listed in Figure 8-4. Address the following restrictions and prerequisites:
– Restrictions:
• You are not using the storage migration wizard to migrate clustered hosts, including
clusters of VMware hosts and Virtual I/O Servers (VIOSs).
• You are not using the storage migration wizard to migrate SAN boot images.
If you have either of these two environments, the migration must be performed outside
of the wizard because more steps are required.
The VMware vSphere Storage vMotion feature might be an alternative for migrating
VMware clusters. For information, see this web page.
– Prerequisites:
• The system and the external storage controller are connected to the same SAN
fabric.
• If there are VMware ESX hosts involved in the data migration, the VMware ESX
hosts are set to allow volume copies to be recognized.
For more information about the Storage Migration prerequisites, see 8.1.2, “Prerequisites”
on page 445.
If all restrictions are satisfied and prerequisites are met, select all of the options and click
Next, as shown in Figure 8-4.

Figure 8-4 Restrictions and prerequisites confirmation

5. Prepare the environment migration by following the instructions that are shown in
Figure 8-5.

Figure 8-5 Preparing your environment for storage migration

The preparation phase includes the following steps:


a. Before migrating the storage, ensure that all host operations are stopped to prevent
applications from generating I/O to the volumes that are being migrated.
b. Remove all zones between the hosts and the controller you are migrating.
c. Hosts usually do not support concurrent multipath drivers. You might need to remove
drivers that are not compatible with the system from the hosts and use the
recommended device drivers. For more information about supported drivers, see the
SSIC.
d. If you are migrating external storage controllers that connect to the system by using FICON, ensure that you complete appropriate zoning changes to simplify migration.

Use the following guidelines to ensure that zones are configured correctly for migration:
• Zoning rules
For every storage controller, create one zone that contains this system’s ports from
every node and all external storage controller ports, unless otherwise stated by the
zoning guidelines for that storage controller.
This system requires single-initiator zoning for all large configurations that contain
more than 64 host objects. Each server FC port must be in its own zone, which
contains the FC port and this system’s ports. In configurations of fewer than 64
hosts, you can have up to 40 FC ports in a host zone if the zone contains similar
host bus adapters (HBAs) and operating systems (OSs).
• Storage system zones
In a storage system zone, this system’s nodes identify the storage systems.
Generally, create one zone for each storage system. Host systems cannot operate
on the storage systems directly. All data transfer occurs through this system’s
nodes.
• Host zones
In the host zone, the host systems can identify and address this system’s nodes.
You can have more than one host zone and more than one storage system zone.
Create one host zone for each host FC port.
Because the system should now be seen as a host from the external controller to be
migrated, you must define the system as a host or host group by using the WWPNs or
IQNs on the system to be migrated. Some controllers do not support LUN-to-host
mapping, so they present all the LUs to the system. In that case, all the LUs should be
migrated.
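To collect the WWPNs that must be registered on the external controller, you can list this system's FC ports from the CLI, as in the following minimal sketch:

IBM_Storwize:ITSOV7K:superuser>lsportfc

The WWPN column of the lsportfc output lists the port names to use when defining the system as a host object on the controller to be migrated.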
6. If the previous preparation steps were followed, the system is now seen as a host from the
controller to be migrated. LUs can then be mapped to the system. Map the external
storage controller by following the instructions that are shown in Figure 8-6 on page 451.

Figure 8-6 Steps to map the LUs to be migrated

Before you migrate storage, record the hosts and their WWPNs or IQNs for each volume
that is being migrated and the SCSI LUN when it is mapped to the system.
Table 8-1 shows an example of a table that is used to capture information that relates to
the external storage system LUs.

Table 8-1 Example table for capturing external LU information

| Volume | Name or ID | Hosts accessing this LUN | Host WWPNs or IQNs | SCSI LUN when mapped |
|---|---|---|---|---|
| 1 | DB2logs | DB2server | 21000024FF2... | 0 |
| 2 | Db2data | DB2Server | 21000024FF2... | 1 |
| 3 | file system | FileServer1 | 21000024FF2... | 2 |

Note: Make sure to record the SCSI ID of the LUs to which the host is originally
mapped. Some OSs do not support changing the SCSI ID during the migration.

Click Next and wait for the system to discover external devices.
7. The next window shows all of the MDisks that were found. If the MDisks to be migrated are
not in the list, check your zoning or Internet Protocol (IP) configuration, as applicable, and
your LUN mappings. Repeat the previous step to trigger the discovery procedure again.
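If you prefer to trigger and verify the discovery from the CLI, a minimal sketch is:

IBM_Storwize:ITSOV7K:superuser>detectmdisk
IBM_Storwize:ITSOV7K:superuser>lsmdisk -filtervalue mode=unmanaged

The detectmdisk command rescans the SAN for newly mapped LUs, and the filtered lsmdisk command lists only the MDisks that are not yet assigned to a pool.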

Select the MDisks that you want to migrate, as shown in Figure 8-7.

Figure 8-7 Discovering mapped LUs from external storage

In this example, one MDisk (mdisk8) was found and is migrated. Detailed information
about an MDisk is available by double-clicking it. To select multiple elements from the
table, press Shift and then left-click or Ctrl and then left-click. Optionally, you can export
the discovered MDisks list to a comma-separated value (CSV) file for further use by
clicking Export to CSV.

Note: Select only the MDisks that are applicable to the current migration plan. After
step 15 on page 460 of the current migration completes, another migration can be
started to migrate any remaining MDisks.

8. Click Next and wait for the MDisk to be imported. During this task, the system creates a
new storage pool that is called MigrationPool_XXXX and adds the imported MDisk to the
storage pool as image mode volumes.

9. The next page lists all of the hosts that are configured on the system, which you can use to
configure new hosts. This step is optional and can be bypassed by clicking Next. In this
example, the host linuxsrv is already configured, as shown in Figure 8-8. If no host is
selected, you can create a host after the migration completes and map the imported
volumes to it.

Figure 8-8 Listing of configured hosts to which to map the imported volume

10. If the host that needs access to the migrated data is not configured, select Add Host to begin the Add Host wizard. Enter the host connection type, name, and connection details.
Optionally, click Advanced to modify the host type and I/O group assignment. Figure 8-9
shows the Add Host wizard with the details completed.
For more information about the Add Host wizard, see Chapter 7, “Hosts” on page 369.

Figure 8-9 Creating a host during the migration process

11. Click Add. The host is created and is now listed in the Configure Hosts window, as shown
in Figure 8-8 on page 453. Click Next to proceed.

12. The next window lists the new volumes, where you can map them to hosts, as shown in
Figure 8-10. The volumes are listed with names that were automatically assigned by the
system. The names can be changed to reflect something more meaningful to the user by
selecting the volume and clicking Rename in the Actions menu.

Figure 8-10 Mapping volumes to hosts

13. Map the volumes to the hosts by selecting the volumes and clicking Map to Host or Host
Cluster, as shown in Figure 8-11. This step is optional and can be bypassed by clicking
Next.

Figure 8-11 Selecting the host to which to map the new volume

You can manually assign a SCSI ID to the LUNs you are mapping. This technique is useful
when the host needs to have the same LUN ID for a LUN before and after it is migrated. To
assign the SCSI ID manually, select the Self Assign option and follow the instructions as
shown in Figure 8-12.

Figure 8-12 Manually assign a LUN SCSI ID to a mapped volume

When your LUN mapping is ready, click Next. A new dialog box opens with a summary of
the new and existing mappings, as shown in Figure 8-13.

Figure 8-13 Volumes mapping summary before migration

Click Map Volumes and wait for the mappings to be created. Continue to map volumes to
hosts until all mappings are created. Click Next to continue with the next migration step.
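To verify the resulting mappings and their SCSI IDs from the CLI, you can query the host object; a minimal sketch that uses the host from the earlier example:

IBM_Storwize:ITSOV7K:superuser>lshostvdiskmap linuxsrv

The SCSI_id column of the output shows the LUN ID under which each volume is presented to that host.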
14. Select the storage pool into which you want to migrate the imported volumes. Ensure that the selected storage pool has enough space to accommodate the migrated volumes before you continue. This step is optional: you can decide not to migrate to a storage pool and to leave the imported MDisk as an image mode volume. However, this technique is not recommended because no volume mirroring is created. Therefore, no protection is available for the imported MDisk, and no data transfer occurs from the controller to be migrated to the system.

Click Next, as shown in Figure 8-14.

Figure 8-14 Selecting the pool to which to migrate the MDisk

The migration starts. This task continues running in the background and uses the volume
mirroring function to place a generic copy of the image mode volumes in the selected
storage pool.

Note: With volume mirroring, the system creates two copies (Copy0 and Copy1) of a
volume. Typically, Copy0 is located in the migration pool, and Copy1 is created in the
target pool of the migration. When the host generates a write I/O on the volume, data is
written concurrently on both copies. Read I/Os are performed on the primary copy only.

In the background, a mirror synchronization of the two copies is performed and runs
until the two copies are synchronized. The speed of this background synchronization
can be changed in the volume properties.
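The synchronization rate can also be adjusted from the CLI; a minimal sketch that uses the volume from Example 8-1 (the value is illustrative):

IBM_Storwize:ITSOV7K:superuser>chvdisk -syncrate 80 legacy_controller_0000000000000002

Higher -syncrate values synchronize the two copies faster at the cost of more background I/O on the migration pool and the target pool.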

15. Click Finish to end the storage migration wizard, as shown in Figure 8-15.

Figure 8-15 Migration is started

The end of the wizard is not the end of the migration task. You can find the progress of the
migration in the Storage Migration window, as shown in Figure 8-16. The target storage
pool and the progress of the volume copy synchronization is also displayed there.

Figure 8-16 The ongoing migration is listed in the Storage Migration window

16. If you want to check the progress by using the CLI, run the lsvdisksyncprogress
command because the process is essentially a volume copy, as shown in Example 8-1.

Example 8-1 Migration progress on the CLI


IBM_Storwize:ITSO-V7k:superuser>lsvdisksyncprogress
vdisk_id vdisk_name progress estimated_completion_time
20 legacy_controller_0000000000000002 1 191021123932

17. When the migration completes, select all of the migrations that you want to finalize,
right-click the selection, and click Finalize, as shown in Figure 8-17.

Figure 8-17 Finalizing a migration

You are asked to confirm the Finalize action because this process removes the MDisk
from the Migration Pool and deletes the primary copy of the mirrored volume. The
secondary copy remains in the destination pool and becomes the primary. Figure 8-18
shows the confirmation message.

Figure 8-18 Migration finalization confirmation

18. When finalized, the image mode copies of the volumes are deleted and the associated
MDisks are removed from the migration pool. The status of those MDisks returns to
unmanaged. You can verify the status of the MDisks by selecting Pools → External
Storage, as shown in Figure 8-19. In the example, mdisk3 was migrated and finalized. It
appears as unmanaged in the external storage window.

Figure 8-19 External Storage MDisks window

All the steps that are described in the Storage Migration wizard can be performed manually, but it is generally best to use the wizard as a guide.

Note: For a “real-life” demonstration of the storage migration capabilities that are offered
with IBM Spectrum Virtualize, see this web page (log in required).

The demonstration includes three different step-by-step scenarios showing the integration
of an IBM SAN Volume Controller cluster into an environment with one Microsoft Windows
Server (image mode), one IBM AIX server (logical volume manager (LVM) mirroring), and
one VMware ESXi server (storage vMotion).

Chapter 9. Advanced features for storage efficiency

IBM Spectrum Virtualize running inside a storage system offers several functions for storage
optimization and efficiency. This chapter introduces the basic concepts of those functions,
and also provides a short technical overview and implementation recommendations.

For more information about the planning and configuration of storage efficiency features, see
the following publications:
- IBM System Storage SAN Volume Controller, IBM Storwize V7000, and IBM FlashSystem 7200 Best Practices and Performance Guidelines, SG24-7521
- Introduction and Implementation of Data Reduction Pools and Deduplication, SG24-8430

This chapter includes the following topics:


- 9.1, “IBM Easy Tier” on page 464
- 9.2, “Thin-provisioned volumes” on page 482
- 9.3, “UNMAP” on page 485
- 9.4, “Data Reduction Pools” on page 487
- 9.5, “Saving estimations for compression and deduplication” on page 498
- 9.6, “Overprovisioning and data reduction on external storage” on page 500

9.1 IBM Easy Tier
IBM Spectrum Virtualize includes the IBM System Storage Easy Tier function, which enables
automated subvolume data placement throughout different storage tiers, and automatically
moves extents within the same storage tier to intelligently align the system with workload
requirements. It also optimizes the use of flash drives or flash arrays.

Many applications exhibit a significant skew in the distribution of I/O workload: A small fraction
of the storage is responsible for a disproportionately large fraction of the total I/O workload of
an environment.

Easy Tier acts to identify this skew and automatically place data to take advantage of it. By
moving the “hottest” data onto the fastest tier of storage, the workload on the remainder of the
storage is reduced. By servicing most of the application workload from the fastest storage,
Easy Tier accelerates application performance and increases overall server utilization, which
can reduce costs regarding servers and application licenses.

Easy Tier also reduces storage cost because the system always places the data with the
highest I/O workload on the fastest tier of storage. Depending on the workload pattern, a
large portion of the capacity can be provided by a lower and less expensive tier without
impacting application performance.

Note: Easy Tier is a licensed function. On IBM FlashSystem 9100 and IBM FlashSystem
7200, it is included in the base code. No actions are required to activate the Easy Tier
license on these systems. On IBM FlashSystem 5100, you must have the appropriate
number of licenses to run Easy Tier.

Without a license, Easy Tier balances I/O workload only between managed disks (MDisks)
in the same tier.

9.1.1 Easy Tier concepts


Easy Tier is a performance optimization function that automatically migrates (or moves)
extents that belong to a volume between different storage tiers based on their I/O load.
Movement of the extents is online and unnoticed from the host point of view. As a result of
extent movement, the volume no longer has all its data in one tier, but rather in two or more
tiers, and each tier provides optimal performance for the extent, as shown in Figure 9-1 on
page 465.

Figure 9-1 Easy Tier

Easy Tier monitors the I/O activity and latency of the extents on all Easy Tier enabled storage
pools. Based on the performance log, it creates an extent migration plan and promotes
(moves) high activity or hot extents to a higher disk tier within the same storage pool. It also
demotes extents whose activity dropped off, or cooled, by moving them from a higher disk tier
MDisk back to a lower tier MDisk.

If a pool contains only MDisks of a single tier, Easy Tier operates only in balancing mode.
Extents are moved between MDisks in the same tier to balance I/O workload within that tier.

Tiers of storage
The MDisks (external logical units (LUs) or redundant array of independent disks (RAID)
arrays) that are presented to the system might have different performance attributes because
of their technology type, such as flash or spinning drives and other characteristics.

The system divides available storage into the following tiers:


- Storage-class memory (SCM)
  The SCM tier is used when the pool contains drives that use persistent memory technologies that improve the endurance and speed of current flash storage device technologies.
- Tier 0 flash
  Tier 0 flash drives are high-performance flash drives that use enterprise flash technology.
- Tier 1 flash
  Tier 1 flash drives represent the read-intensive flash drive technology. Tier 1 flash drives are lower-cost flash drives that typically offer capacities larger than enterprise-class flash, but have lower performance and write endurance characteristics.

- Enterprise tier
  The enterprise tier is used when the pool contains MDisks on enterprise-class hard disk drives (HDDs), which are disk drives that are optimized for performance.
- Nearline (NL) tier
  The NL tier is used when the pool has MDisks on NL-class disk drives that are optimized for capacity.

The system automatically sets the tier for internal array mode MDisks because it knows the
capabilities of array members, physical drives, and modules. External MDisks need manual
tier assignment when they are added to a storage pool.

Note: The tier of MDisks that is mapped from certain types of IBM System Storage
Enterprise Flash is fixed to tier0_flash, and cannot be changed.

Although the system can distinguish between five tiers, Easy Tier manages only a three-tier
storage architecture within each storage pool. MDisk tiers are mapped to Easy Tier tiers
depending on the pool configuration, as shown in Table 9-1.

Table 9-1 Storage tier to Easy Tier mapping

| Configuration | Easy Tier top tier | Easy Tier middle tier | Easy Tier bottom tier |
|---|---|---|---|
| SCM (+ Tier0_Flash) | SCM | (Tier0_Flash) | |
| SCM + Tier0_Flash (+ Tier1_Flash) | SCM | Tier0_Flash | (Tier1_Flash) |
| SCM + Tier0_Flash (+ Tier1_Flash) + Enterprise + NL (unsupported) | SCM | Tier0_Flash (+ Tier1_Flash) | Enterprise + NL |
| SCM + Tier0_Flash + Enterprise/NL | SCM | Tier0_Flash | Enterprise/NL |
| SCM + Tier0_Flash + Tier1_Flash + Enterprise/NL (unsupported) | SCM | Tier0_Flash + Tier1_Flash | Enterprise/NL |
| SCM + Tier1_Flash (+ Enterprise/NL) | SCM | Tier1_Flash | (Enterprise/NL) |
| SCM + Tier1_Flash + Enterprise + NL | SCM | Tier1_Flash + Enterprise | NL |
| SCM + Enterprise/NL | SCM | Enterprise/NL | |
| SCM + Enterprise + NL | SCM | Enterprise | NL |
| Tier0_Flash (+ Tier1_Flash) | Tier0_Flash | (Tier1_Flash) | |
| Tier0_Flash + Tier1_Flash + Enterprise/NL | Tier0_Flash | Tier1_Flash | Enterprise/NL |
| Tier0_Flash + Tier1_Flash + Enterprise + NL | Tier0_Flash | Tier1_Flash + Enterprise | NL |
| Tier0_Flash + Enterprise (+ NL) | Tier0_Flash | Enterprise | (NL) |
| Tier0_Flash + NL | Tier0_Flash | NL | |
| Tier1_Flash (+ Enterprise/NL) | Tier1_Flash | (Enterprise/NL) | |
| Tier1_Flash + Enterprise + NL | Tier1_Flash | Enterprise | NL |
| Enterprise (+ NL) | Enterprise | (NL) | |
| NL | NL | | |

The table represents all the possible pool configurations. Some entries in the table contain optional tiers (shown in parentheses), but the configurations without the optional tiers are also valid.

Sometimes, a single Easy Tier tier contains MDisks from more than one storage tier. For
example, consider a pool with SCM, Tier1_Flash, Enterprise, and NL. SCM is the top tier, and
Tier1_Flash and Enterprise share the middle tier. NL is represented by the bottom tier.

Note: Some storage pool configurations with four or more different tiers are not supported.
If such a configuration is detected, an error is logged and Easy Tier enters measure mode,
which means no extent migrations are performed.

For more information about planning and configuration considerations or best practices, see
IBM System Storage SAN Volume Controller, IBM Storwize V7000, and IBM FlashSystem
7200 Best Practices and Performance Guidelines, SG24-7521.

Easy Tier automatic data placement


Easy Tier continuously monitors volumes for host I/O activity. It collects performance statistics
for each extent, and derives exponential moving averages for a rolling 24-hour period of I/O
activity. Random and sequential I/O rate, I/O block size and bandwidth for reads and writes,
and I/O response time are collected.

A set of algorithms is used to decide where the extents should be and whether extent
relocation is required. Once per day, Easy Tier analyzes the statistics to determine which data
should be sent to a higher performing tier or a lower tier. Four times per day, it analyzes the
statistics to identify whether any data must be rebalanced between MDisks in the same tier.
Once every 5 minutes, Easy Tier checks the statistics to identify whether any of the MDisks
are overloaded.

Based on this information, Easy Tier generates a migration plan that must be run for optimal
data placement. The system spends the necessary time running the migration plan. The
migration rate is limited to make sure host I/O performance is not affected while data is
relocated.

The migration plan can consist of the data movement actions on volume extents, as shown in
Figure 9-2. Although each action is shown once, all movement actions can be performed
between any pair of adjacent tiers.

Figure 9-2 Actions on extents

Here are the possible actions:

- Promote
  Active data is moved from a lower tier of storage to a higher tier to improve the overall system performance.
- Promote Swap
  Active data is moved from a lower tier of storage to a higher tier to improve overall system performance. Less active data is moved first from the higher tier to the lower tier to make space.
- Warm Promote
  When an MDisk becomes overloaded, active data is moved from a lower tier to a higher tier to reduce the workload on the MDisk, which addresses the situation where a lower tier suddenly becomes very active. Instead of waiting for the next migration plan, Easy Tier can react immediately. Warm promote acts in a similar way to warm demote. If the 5-minute average performance shows that a layer is overloaded, Easy Tier immediately starts to promote extents until the condition is relieved.
- Cold Demote
  Inactive or less active data is moved from a higher tier of storage to a lower tier to free space on the higher tier. Easy Tier automatically frees extents on the higher storage tier before the extents on the lower tier become hot, which helps the system to be more responsive to new hot data.
- Warm Demote
  When an MDisk becomes overloaded, active data is moved from a higher tier to a lower tier to reduce the workload on the MDisk. Easy Tier continuously ensures that the higher performance tier does not suffer from saturation or overload conditions that might affect the overall performance in the pool. This action is triggered when bandwidth or input/output operations per second (IOPS) exceeds a predefined threshold of an MDisk and causes the movement of selected extents from the higher-performance tier to the lower-performance tier to prevent MDisk overload.
- Balancing Move
  Data is moved within the same tier from an MDisk with a higher workload to one with a lower workload to balance the workload within the tier, which automatically populates new MDisks that were added to the pool.
- Balancing Swap
  Data is moved within the same tier from an MDisk with a higher workload to one with a lower workload to balance the workload within the tier. Other less active data is moved first to make space.

Extent migration occurs at a maximum rate of 12 GB every 5 minutes for the entire system. It prioritizes the following actions:
- Promote and rebalance get equal priority.
- Demote is allocated 1 GB every 5 minutes and then gets whatever migration bandwidth is left.

Note: Extent promotion or demotion occurs only between adjacent tiers. In a three-tier
storage pool, Easy Tier does not move extents from the top directly to the bottom tier or
vice versa without moving to the middle tier first.

The Easy Tier overload protection is designed to avoid overloading any type of MDisk with too
much work. To achieve this task, Easy Tier must have an indication of the maximum capability
of a MDisk.

For an array made of locally attached drives, the system can calculate the performance of the
MDisk because it is pre-programmed with performance characteristics for different drives and
array configurations. For a storage area network (SAN)-attached MDisk, the system cannot
calculate the performance capabilities. Therefore, follow the best practice guidelines when
configuring external storage, particularly the ratio between physical disks and MDisks that is
presented to the system.

Each MDisk has an Easy Tier load parameter (low, medium, high, or very_high) that can be
fine-tuned manually. If you analyze the statistics and find that the system does not appear to
be sending enough IOPS to your external MDisk, you can increase the load parameter.

Easy Tier operating modes


Easy Tier includes the following main operating modes:
- Off
  When off, no statistics are recorded and no cross-tier extent migration occurs. Also, with Easy Tier turned off, no storage pool balancing across MDisks in the same tier is performed, even in single-tier pools.
- Evaluation or measurement only
  When in this mode, Easy Tier collects only usage statistics for each extent in a storage pool if it is enabled on both the volume and the pool. No extents are moved. This collection is typically done for a single-tier pool that contains only HDDs so that the benefits of adding flash drives to the pool can be evaluated before any major hardware acquisition.
- Automatic data placement and storage pool balancing
  In this mode, usage statistics are collected and extent migration is performed between tiers (if there is more than one tier in the pool). Also, auto-balance between MDisks in each tier is performed.

Note: The auto-balance process automatically balances existing data when MDisks are
added into a pool. However, it does not migrate extents from existing MDisks to achieve
even extent distribution among all old and new MDisks in the storage pool. The Easy
Tier migration plan is based on performance. It is not based on the capacity of the
underlying MDisks or on the number of extents on them.

Implementation considerations
Consider the following implementation and operational rules when you use the IBM System
Storage Easy Tier function on the storage system:
- If the system contains self-compressing drives (IBM FlashCore Module (FCM) drives) in the top tier of storage in a pool with multiple tiers and Easy Tier is in use, consider setting an overallocation limit within these pools, as described in “Overallocation limit” on page 475.
- Volumes that are added to storage pools use extents from the “middle” tier of the three-tier model, if available. Easy Tier then collects usage statistics to determine which extents to move to “faster” or “slower” tiers. If there are no free extents in the middle tier, extents from the other tiers are used (bottom tier if possible, otherwise top tier).
- When an MDisk with allocated extents is deleted from a storage pool, extents in use are migrated to MDisks in the same tier as the MDisk that is being removed, if possible. If insufficient extents exist in that tier, extents from another tier are used.
- Easy Tier monitors the extent I/O activity of each copy of a mirrored volume. Easy Tier works with each copy independently of the other copy. This situation applies to volume mirroring and IBM HyperSwap and Remote Copy (RC).

Note: Volume mirroring can have different workload characteristics on each copy of the
data because reads are normally directed to the primary copy and writes occur to both
copies. Therefore, the number of extents that Easy Tier migrates between the tiers
might differ for each copy.

- Easy Tier automatic data placement is not supported on image mode or sequential volumes. However, it supports evaluation mode for such volumes. I/O monitoring is supported and statistics are accumulated.
- When a volume is migrated out of a storage pool that is managed with Easy Tier, Easy Tier automatic data placement mode is no longer active on that volume. Automatic data placement is also turned off while a volume is being migrated, even when it is between pools that both have Easy Tier automatic data placement enabled. Automatic data placement for the volume is reenabled when the migration is complete.
  When the system migrates a volume from one storage pool to another, it attempts to migrate each extent to an extent in the new storage pool from the same tier as the original extent, if possible.
- When Easy Tier automatic data placement is enabled for a volume, you cannot use the

9.1.2 Implementing and tuning Easy Tier
The Easy Tier function is enabled by default. It starts monitoring I/O activity immediately after storage pools and volumes are created, and starts extent migration when the necessary I/O statistics are collected.

A few parameters can be adjusted. Also, Easy Tier can be turned off on selected volumes in
storage pools.

MDisk settings
The tier for internal (array) MDisks is detected automatically and depends on the type of
drives, which are its members. No adjustments are needed.

For an external MDisk, the tier is assigned when it is added to a storage pool. To assign the
MDisk, select Pools → External Storage, select the MDisk (or MDisks) to add, and click
Assign.

Note: The tier of MDisks that is mapped from certain types of IBM System Storage
Enterprise Flash is fixed to tier0_flash and cannot be changed.

You can choose the target storage pool and storage tier that is assigned, as shown in
Figure 9-3.

Figure 9-3 Choosing the tier when assigning MDisks

To change the storage tier for an MDisk that is assigned, select Pools → External Storage,
right-click one or more selected MDisks, and choose Modify Tier, as shown in Figure 9-4.

Figure 9-4 Changing the MDisk tier

Note: Assigning a tier to an external MDisk that does not match the physical back-end
storage type is not supported by IBM and can lead to unpredictable consequences.

To determine what tier is assigned to an MDisk, select Pools → External Storage, select
Actions → Customize columns, and select Tier. This action includes the current tier setting
into a list of MDisk parameters that are shown in the External Storage pane. You can also find
this information in MDisk properties. To show this information, right-click MDisk, select
Properties, and click View more details, as shown in Figure 9-5.

Figure 9-5 MDisk properties

To list MDisk parameters with the CLI, run the lsmdisk command. The current tier for each
MDisk is shown. To change the external MDisk tier, run the chmdisk command with the -tier
parameter, as shown in Example 9-1.

Example 9-1 Listing and changing tiers for MDisks (partially shown)
IBM_Storwize:ITSOV7K:superuser>lsmdisk
id name status mode mdisk_grp_id ... tier encrypt
1 mdisk1 online unmanaged ... tier0_flash no
2 mdisk2 online managed 0 ... tier_enterprise no
3 mdisk3 online managed 0 ... tier_enterprise no
<...>
IBM_Storwize:ITSOV7K:superuser>chmdisk -tier tier1_flash mdisk2
IBM_Storwize:ITSOV7K:superuser>

For an external MDisk, the system cannot calculate its exact performance capabilities, so it
has several predefined levels. In rare cases, statistics analysis might show that Easy Tier is
overusing or underusing an MDisk. If so, levels can be adjusted only by using the CLI. Run
chmdisk with the -easytierload parameter. To reset the Easy Tier load to the system default for the chosen MDisk, use -easytierload default, as shown in Example 9-2.

Example 9-2 Changing the Easy Tier load


IBM_Storwize:ITSOV7K:superuser>chmdisk -easytierload default mdisk2
IBM_Storwize:ITSOV7K:superuser>
IBM_Storwize:ITSOV7K:superuser>lsmdisk mdisk2 | grep tier
tier tier_enterprise
easy_tier_load high
IBM_Storwize:ITSOV7K:superuser>

Note: Adjust the Easy Tier load settings only if instructed to do so by IBM Technical
Support or your solution architect.

To list the current Easy Tier load setting of an MDisk, run lsmdisk with the MDisk name or ID
as a parameter.
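If support analysis indicates that a change is needed, a specific level can be set; a minimal sketch (valid levels include low, medium, high, and very_high):

IBM_Storwize:ITSOV7K:superuser>chmdisk -easytierload high mdisk2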

Storage pool settings


When a storage pool (either standard pool or Data Reduction Pool (DRP)) is created, Easy
Tier is turned on by default. The system automatically enables Easy Tier functions when the storage pool contains MDisks from more than one tier. It also enables automatic rebalancing when the storage pool contains MDisks from only one tier.

You can disable Easy Tier or switch it to measure-only mode when creating a pool or any
moment later. This task cannot be done with the GUI, but only by using the CLI.

To check the current Easy Tier function state on a pool, select Pools → Pools, right-click the
selected pool, click Properties, and expand View more details, as shown in Figure 9-6. This
window also shows the amount of data that is stored on each tier.

Figure 9-6 Pool properties

Easy Tier can be in one of the following statuses:


- Active
  Indicates that a pool is being managed by Easy Tier, and extent migrations between tiers can be performed. Performance-based pool balancing of MDisks in the same tier is also enabled. This status is the expected state for a pool with two or more tiers of storage.
- Balanced
  Indicates that a pool is being managed by Easy Tier to provide performance-based pool balancing of MDisks in the same tier. This status is the expected state for a pool with a single tier of storage or when no Easy Tier license is available.
- Inactive
  Indicates that Easy Tier is inactive (turned off).
- Measured
  Shows that Easy Tier statistics are being collected, but no extent movement can be performed.

To find the status of the Easy Tier function on the pools by using the CLI, run the lsmdiskgrp
command without any parameters. To turn off or on Easy Tier, run the chmdiskgrp command,
as shown in Example 9-3. By running lsmdiskgrp with pool name/ID as a parameter, you can
also determine how much storage of each tier is available within the pool.

Example 9-3 Listing and changing the Easy Tier status on pools
IBM_Storwize:ITSOV7K:superuser>lsmdiskgrp
id name status mdisk_count ... easy_tier easy_tier_status
0 TieredPool online 1 ... auto balanced
IBM_Storwize:ITSOV7K:superuser>chmdiskgrp -easytier measure TieredPool

IBM_Storwize:ITSOV7K:superuser>chmdiskgrp -easytier auto TieredPool
IBM_Storwize:ITSOV7K:superuser>

Overallocation limit
If the system contains self-compressing drives (FCM drives) in the top tier of storage in a pool
with multiple tiers and Easy Tier is in use, consider setting an overallocation limit within
these pools. The overallocation limit has no effect in pools with a different configuration.

Arrays that are created from self-compressing drives have a written capacity limit (virtual
capacity before compression) that is higher than the array’s usable capacity (physical
capacity). Writing highly compressible data to the array means that the written capacity limit
can be reached without running out of usable capacity. However, if data is not compressible or
the compression ratio is low, it is possible to run out of usable capacity before reaching the
written capacity limit of the array, which means the amount of data that is written to a
self-compressing array must be controlled to prevent the array from running out of space.

Without a maximum overallocation limit, Easy Tier scales the usable capacity of the array
based on the actual compression ratio of the data that is stored on the array at a point in time
(PiT). Easy Tier migrates data to the array and might use a large percentage of the usable
capacity in doing so, but it stops migrating to the array when the array comes close to running
out of usable capacity. Then, it might start migrating data away from the array again to free
space.

However, Easy Tier migrates storage only at a slow rate, which might not keep up with
changes to the compression ratio within the tier. When Easy Tier swaps extents or data is
overwritten by hosts, compressible data might be replaced with data that is less compressible,
which increases the amount of usable capacity that is consumed by extents and might result
in self-compressing arrays running out of space, which can cause a loss of access to data
until the condition is resolved.

So, the user might specify the maximum overallocation ratio for pools that contain
self-compressing arrays to prevent out-of-space scenarios. The value acts as a multiplier of
the physically available space in self-compressing arrays. The allowed values are a
percentage in the range of 100% (default) to 400% or off. The default setting allows no
overallocation on new pools. Setting the value to off disables this feature.

When enabled, Easy Tier scales the available usable capacity of self-compressing arrays by
using the specified overallocation limit and adjusts the migration plan to make sure the
fullness of these arrays stays below the maximum overallocation. Specify the maximum
overallocation limit based on the estimated lowest compression ratio of the data that is written
to the pool.

For example, for an estimated compression ratio of 1.2:1, specify an overallocation limit of
120% to put a limit on the overallocation. Easy Tier stops migrating data to self-compressing
arrays in the pool after the written capacity reaches 120% of the physical (usable) capacity of
the array, which is the case even if the written capacity limit of the array is not reached yet or
the current compression ratio of the data that is stored on the array is higher than 1.2:1 (and
thus more usable capacity would be available). This setting prevents changes to the
compression ratio within the specified limits from causing the array to run out of space.

To modify the maximum overallocation limit of a pool by using the GUI, select Pools → Pools,
right-click a pool, and select Modify Overallocation Limit, as shown in Figure 9-7.

Figure 9-7 Modifying the pool overallocation limit

On the CLI, run the chmdiskgrp command with the -etfcmoverallocationmax parameter to
set a percentage or use off to disable the limit.
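For example, a minimal sketch that matches the 1.2:1 compression estimate described above (the pool name is illustrative):

IBM_Storwize:ITSOV7K:superuser>chmdiskgrp -etfcmoverallocationmax 120 Pool0

The resulting limit is then visible in the detailed lsmdiskgrp output for the pool.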

Volume settings
By default, each striped-type volume enables Easy Tier to manage its extents. If you need to
fix the volume extent location (for example, to prevent extent demotes and to keep the volume
in the higher-performing tier), you can turn off Easy Tier management for a particular volume
copy.

Note: Thin-provisioned and compressed volumes in a DRP cannot have Easy Tier turned
off. It is possible to turn off Easy Tier only at a pool level.

You can do this task only by using the CLI. Run the lsvdisk command to check and the
chvdisk command to modify the Easy Tier function status on a volume copy, as shown in
Example 9-4.

Example 9-4 Checking and modifying the Easy Tier settings on a volume
IBM_Storwize:ITSO-V7k:superuser>lsvdisk vdisk0 |grep easy_tier
easy_tier on
easy_tier_status balanced
IBM_Storwize:ITSO-V7k:superuser>chvdisk -easytier off vdisk0
IBM_Storwize:ITSOV7K:superuser>

System-wide settings
There is a system-wide setting that is called Easy Tier acceleration that is disabled by default.
Turning it on makes Easy Tier move extents up to four times faster than the default setting. In
acceleration mode, Easy Tier can move up to 48 GiB per 5 minutes, but in normal mode it
moves up to 12 GiB. The following use cases are the most probable use cases for
acceleration:
- When adding capacity to the pool either by adding to an existing tier or by adding a tier to the pool, accelerating Easy Tier can quickly spread volumes onto the new MDisks.
- Migrating the volumes between the storage pools when the target storage pool has more tiers than the source storage pool, so Easy Tier can quickly promote or demote extents in the target pool.

Note: Enabling Easy Tier acceleration is advised only during periods of low system activity, for example, after migrations or storage reconfigurations occur. It is a best practice to keep Easy Tier acceleration mode off during normal system operation to avoid performance impacts that are caused by accelerated data migrations.

This setting can be changed non-disruptively, but only by using the CLI. To turn on or off Easy
Tier acceleration mode, run the chsystem command. Run the lssystem command to check its
current state, as shown in Example 9-5.

Example 9-5 The chsystem command


IBM_Storwize:ITSOV7K:superuser>lssystem |grep easy_tier
easy_tier_acceleration off
IBM_Storwize:ITSOV7K:superuser>chsystem -easytieracceleration on
IBM_Storwize:ITSOV7K:superuser>

9.1.3 Monitoring Easy Tier activity


When Easy Tier is active, it constantly monitors and records I/O activity, collecting extent heat
data. Heat data files are produced approximately once a day and summarize the activity per
volume since the last heat data file was produced. Easy Tier activity can be monitored by
using the GUI or the external IBM Storage Tier Advisor Tool (STAT) application.

Monitoring Easy Tier by using the GUI


To view the most recent Easy Tier statistics, select Monitoring → Easy Tier Reports. Select
the storage pool that you want to see reports for in the filter section on the left, as shown in
Figure 9-8. It takes approximately 24 hours for reports to be available after turning on Easy
Tier or after a configuration node failover occurs. If no reports are available, the error message in the figure is shown. In this case, wait until new reports have been generated and then revisit the GUI.

Figure 9-8 Easy Tier reports not available

Three types of reports are available per storage pool: Data Movement, Tier Composition, and
Workload Skew Comparison. Select the corresponding tabs in the GUI to view the charts.
Alternatively, click Export or Export All to download the reports in comma-separated value
(CSV) format.

Data Movement
The Data Movement report shows the extent migrations that Easy Tier performed to relocate
data between different tiers of storage and within the same tier for optimal performance, as
shown in Figure 9-9. The chart displays the data for the previous 24-hour period, in one-hour
increments. You can change the time span and the increments for a more detailed view.

Figure 9-9 Easy Tier Data Movement chart

The X-axis shows a timeline for the selected period by using the selected increments. The
Y-axis indicates the amount of extent capacity that is moved. For each time increment, a
color-coded bar displays the amount of data that is moved by each Easy Tier data movement
action, such as promote or cold demote. For more information about the different movement
actions, see “Easy Tier automatic data placement” on page 467 or click Movement
Description next to the chart to see an explanation in the GUI.

Tier Composition
The Tier Composition chart shows how different types of workloads are distributed between
top, middle, and bottom tiers of storage in the selected pool, as shown in Figure 9-10 on
page 479.

Figure 9-10 Tier Composition chart

A color-coded bar for each tier shows which workload types are present in that tier and how much of the extent capacity in that tier can be attributed to them. Easy Tier distinguishes between the following workload types. Click Composition Description to show a short explanation for each workload type in the GUI.
- Active
  Data with more than 0.1 IOPS/extent access density for small I/Os (< 64 KB block size)
- Active Large
  All data that is not classified above (> 64 KB block size)
- Low Activity
  Data with less than 0.1 IOPS/extent access density
- Inactive
  Data with zero IOPS/extent access density (no recent activity)

Click Easy Tier Mappings to show which MDisks are assigned to which of the three tiers of
Easy Tier, as shown in Figure 9-11. How storage tiers in the system are mapped to Easy Tier
tiers depends on the available storage tiers in the pool. For a list of all possible mappings, see
Table 9-1 on page 466.

Figure 9-11 Easy Tier mappings

Workload Skew Comparison


The Workload Skew Comparison chart displays the percentage of I/O workload that is
attributed to a percentage of the total capacity, as shown in Figure 9-12.

Figure 9-12 Easy Tier Workload Skew Comparison

The X-axis shows the percentage of capacity and the Y-axis shows the corresponding
percentage of workload on that capacity. Workload is classified in small I/O (sum of small
reads and writes) and megabytes per second (MBps) (sum of small and large bandwidth).
The portion of capacity and workload that is attributed to a tier is color-coded in the chart with
a legend above the chart.

Figure 9-12 on page 480 shows that the top tier (Tier1 Flash) contributes only a tiny
percentage of capacity to the pool, but handles around 85% of the IOPS and more than 40%
of the bandwidth in that pool. The middle tier (enterprise disk) handles almost all the
remaining IOPS and an extra 20% of the bandwidth. The bottom tier (NL disk) provides most
of the capacity to the pool but does almost no small I/O workload.

Use this chart to estimate how much storage capacity in the high tiers must be available to
handle most of the workload.

Monitoring Easy Tier by using the IBM Storage Tier Advisor Tool
The IBM STAT is a Windows console application that can analyze heat data files that are
generated by Easy Tier and produce a graphical display of the amount of “hot” data per
volume and predictions of the performance benefits of adding more capacity to a tier in a
storage pool.

Using this method of monitoring, Easy Tier can provide more insights on top of the
information that is available in the GUI.

IBM STAT can be downloaded from this IBM Support web page.

You can download the IBM STAT and install it on your Windows-based computer. The tool is
packaged as an ISO file that must be extracted to a temporary location.

The tool installer is at temporary_location\IMAGES\STAT\Disk1\InstData\NoVM\. By default, the IBM STAT is installed in C:\Program Files\IBM\STAT\.

On the system, the heat data files are found in the /dumps/easytier directory on the
configuration node and are named dpa_heat.node_panel_name.time_stamp.data. Any heat
data file is erased when it exists for longer than 7 days.

Heat files must be offloaded and IBM STAT started from a Windows command prompt
console with the file specified as a parameter, as shown in Example 9-6.

Example 9-6 Running IBM STAT by using the Windows command prompt
C:\Program Files (x86)\IBM\STAT>stat dpa_heat.<node_panel_name>.<time_stamp>.data

The IBM STAT creates a set of .html and .csv files that can be used for Easy Tier analysis.

To download a heat data file, select Settings → Support → Support Package → Download
Support Package → Download Existing Package, as shown in Figure 9-13.

Figure 9-13 Downloading an Easy Tier heat file: Download Support Package

A download window opens that shows all the files in /dumps and its subfolders on a configuration node. You can filter the list by using the “easytier” keyword, select the dpa_heat file or files to be analyzed, and click Download, as shown in Figure 9-14. Save them in a convenient location (for example, to a subfolder that holds the IBM STAT executable file).

Figure 9-14 Downloading Easy Tier heat data file: dpa_heat files

You can also specify the output directory. IBM STAT creates a set of HTML files, and the user can then open the index.html file in a browser to view the results. Also, the following CSV
files are created and placed in the Data_files directory:
- <panel_name>_data_movement.csv
- <panel_name>_skew_curve.csv
- <panel_name>_workload_ctg.csv

These files can be used as input data for other utilities, such as the IBM STAT Charting Utility.

For more information about how to interpret IBM STAT tool output and CSV files analysis, see
IBM System Storage SAN Volume Controller, IBM Storwize V7000, and IBM FlashSystem
7200 Best Practices and Performance Guidelines, SG24-7521.

9.2 Thin-provisioned volumes


In a shared storage environment, thin provisioning is a method for optimizing the usage of
available storage. It relies on the allocation of capacity on demand instead of the traditional
method of allocating all the capacity at the time of initial provisioning. Using this principle
means that storage environments can achieve higher utilization of physical storage resources
by eliminating the unused allocated capacity.

Traditional storage allocation methods often provision large amounts of storage to individual
hosts, but some of it remains unused (not written to), which might result in poor usage rates
(often as low as 10%) of the underlying physical storage resources. Thin provisioning avoids
this issue by presenting more storage capacity to the hosts than it uses from the storage pool.
Physical storage resources can be expanded over time to respond to growth.

9.2.1 Concepts
The system supports thin-provisioned volumes in standard pools and in DRPs.

Each volume has a provisioned capacity and a real capacity. Provisioned capacity is the
volume storage capacity that is available to a host. It is the capacity that is detected by host
operating systems (OSs) and applications and can be used when creating a file system. Real
capacity is the storage capacity that is reserved to a volume copy from a pool.

In a standard-provisioned volume, the provisioned capacity and real capacity are the same.
However, in a thin-provisioned volume, the provisioned capacity can be much larger than the
real capacity.

The provisioned capacity of a thin-provisioned volume is larger than its real capacity. As more
information is written by the host to the volume, more of the real capacity is used. The system
identifies read operations to unwritten parts of the provisioned capacity and returns zeros to
the server without using any real capacity.

The autoexpand feature prevents a thin-provisioned volume from using up its capacity and
going offline. As a thin-provisioned volume uses capacity, the autoexpand feature maintains a
fixed amount of unused real capacity that is called the contingency capacity. For
thin-provisioned volumes in standard pools, the autoexpand feature can be turned on and off.
For thin-provisioned volumes in DRPs, the autoexpand feature is always enabled.

The capacity of a thin-provisioned volume is split into chunks that are called grains. Write I/O
to grains that have not previously been written to causes real capacity to be used to store
data and metadata. The grain size of thin-provisioned volumes in standard pools can be 32
KB, 64 KB, 128 KB, or 256 KB. Generally, smaller grain sizes save space but require more
metadata access, which can adversely impact performance. When you use thin-provisioning
with IBM FlashCopy, specify the same grain size for the thin-provisioned volume and
FlashCopy. The grain size of thin-provisioned volumes in DRPs cannot be changed from the
default of 8 KB.
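As an illustration of these parameters, a thin-provisioned volume can be created in a standard pool from the CLI; a minimal sketch (the pool name, volume name, and sizes are illustrative):

IBM_Storwize:ITSOV7K:superuser>mkvdisk -mdiskgrp Pool0 -size 100 -unit gb -name thin_vol01 -rsize 2% -autoexpand -grainsize 256 -warning 80%

This command creates a volume with 100 GiB of provisioned capacity, an initial real capacity of 2% of that size, the autoexpand feature enabled, a 256 KB grain size, and a warning threshold at 80% of the provisioned capacity.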

A thin-provisioned volume can be converted non-disruptively to a fully allocated volume, or vice versa, by using the volume mirroring function. For example, you can add a
thin-provisioned copy to a fully allocated volume and then remove the fully allocated copy
from the volume after they are synchronized.

The fully allocated to thin-provisioned migration procedure uses a zero-detection algorithm so that grains that contain all zeros do not cause any real capacity to be used. Usually, if the
system is to detect zeros on the volume, you must use software on the host side to write
zeros to all unused space on the disk or file system.
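A hedged example of such host-side zeroing on a Linux file system, assuming the file system is mounted at /mnt/data (the path is illustrative), is to fill the free space with zeros and then delete the file:

# dd if=/dev/zero of=/mnt/data/zerofill bs=1M
# rm -f /mnt/data/zerofill

The dd command stops when the file system is full, which is expected; the zero-detection algorithm then avoids allocating real capacity for those grains during synchronization.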

9.2.2 Implementation
For more information about creating thin-provisioned volumes, see Chapter 6, “Volumes” on
page 277.

Metadata
In a standard pool, the system uses real capacity to store data that is written to the volume,
and metadata that describes the thin-provisioned configuration of the volume. The metadata
that is required for a thin-provisioned volume is usually less than 0.1% of the provisioned
capacity.

If the host uses 100% of the provisioned capacity, some extra space is required on your
storage pool to store thin-provisioning metadata. In a worst case, the real size of a
thin-provisioned volume can be 100.1% of its virtual capacity.

In a DRP, metadata for a thin-provisioned volume is stored separately from user data, and is
not reflected in the volume real capacity. Capacity reporting is handled at the pool level.

Volume parameters
When creating a thin-provisioned volume in a standard pool, some of its parameters can be
modified in Custom mode, as shown in Figure 9-15.

Real capacity defines both the initial volume real capacity and the amount of contingency capacity. When autoexpand is enabled, the system always tries to maintain the contingency capacity by allocating extra real capacity as hosts write to the volume.

The warning threshold can be used to send a notification when the volume is about to run out
of space.

Figure 9-15 Volume parameters for thin-provisioning

In a DRP, fine-tuning of these parameters is not required. The real capacity and warning
threshold are handled at the pool level. The grain size is always 8 KB and autoexpand is
always on.

Host considerations
Do not use defragmentation applications on thin-provisioned volumes. The defragmentation
process can write data to different areas of a volume, which can cause a thin-provisioned
volume to grow to its provisioned size.

9.3 UNMAP
IBM Spectrum Virtualize systems running Version 8.1.0 and later support the Small Computer
System Interface (SCSI) UNMAP command. This command enables hosts to notify the storage
controller of capacity that is no longer required, which can improve capacity savings and
performance of flash storage.

9.3.1 The SCSI UNMAP command


UNMAP is a set of SCSI primitives that enable hosts to indicate to a storage system that space
that is allocated to a range of blocks on a storage volume is no longer required. This
command enables the storage system to take measures and optimize the system so that the
space can be reused for other purposes.

When a host writes to a volume, storage is allocated from the storage pool. To free allocated
space back to the pool, human intervention is needed on the storage system. The SCSI UNMAP
feature is used to allow host OSs to unprovision storage on the storage system, which means
that the resources can automatically be freed in the storage pools and used for other
purposes.

One of the most common use cases is a host application, such as VMware, freeing storage
within a file system. Then, the storage system can reorganize the space, such as optimizing
the data on the volume or the pool so that space can be reclaimed.

A SCSI unmappable volume is a volume for which storage unprovisioning and space reclamation can be triggered by the host OS. The system can pass the SCSI UNMAP command through to back-end flash storage and external storage controllers that support the function.

9.3.2 Back-end SCSI UNMAP


The system can generate and send SCSI UNMAP commands to specific back-end storage
controllers and internal flash storage when volumes are formatted or deleted, extents are
migrated, or an UNMAP command is received from the host. SCSI UNMAP commands at the time
of writing are sent only to IBM FlashSystem A9000, IBM FlashSystem 900 AE3, IBM
FlashSystem 9100, IBM Storwize and IBM FlashSystem family systems, and Pure Storage
systems.

Back-end SCSI UNMAP commands help prevent an overprovisioned storage controller from
running out of free capacity for write I/O requests, which means that when you use supported
overprovisioned back-end storage, back-end SCSI UNMAP should be enabled.

Flash storage requires empty blocks to serve write I/O requests, which means UNMAP can
improve flash performance by erasing blocks in advance.

This feature is turned on by default. It is a best practice to keep back-end UNMAP enabled,
especially if a system is virtualizing an overprovisioned storage controller or uses FCM
modules.



To verify that sending UNMAP commands to a back end is enabled, run the lssystem command,
as shown in Example 9-7.

Example 9-7 Verifying the back-end UNMAP support status


IBM_Storwize:ITSOV7K:superuser>lssystem | grep backend_unmap
backend_unmap on
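
If back-end UNMAP must ever be disabled, for example at the direction of IBM Support, it can
be switched with the chsystem command. The following sketch assumes the -backendunmap
parameter, the back-end counterpart of the -hostunmap parameter that is shown in Example 9-8:

IBM_Storwize:ITSOV7K:superuser>chsystem -backendunmap off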

9.3.3 Host SCSI UNMAP


The IBM Spectrum Virtualize system can advertise support for SCSI UNMAP to hosts. Hosts
can then use the set of SCSI UNMAP commands to indicate that formerly used capacity is no
longer required on a volume.

When these volumes are in DRPs, that capacity becomes reclaimable capacity and is
monitored and collected, and eventually redistributed back to the pool for use by the system.
Volumes in standard pools do not support automatic space reclamation after data is
unmapped, and SCSI UNMAP commands are handled as though they were writes with zero
data.

The system also sends SCSI UNMAP commands to back-end controllers that support them if
host unmaps for the corresponding blocks are received (and back-end UNMAP is enabled).

With host SCSI UNMAP enabled, some host types (for example, Windows, Linux, or VMware)
change their behavior when creating a file system on a volume, issuing SCSI UNMAP
commands to the whole capacity of the volume. The format completes only after all of these
UNMAP commands complete. Some host types run a background process (for example, fstrim
on Linux), which periodically issues SCSI UNMAP commands for regions of a file system that
are no longer required. Hosts might also send UNMAP commands when files are deleted in a
file system.
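
As an example, on a Linux host, space reclamation can be triggered manually with the standard
fstrim utility or continuously by mounting the file system with the discard option. The device
and mount point in the following sketch are illustrative:

# Send UNMAP (TRIM) requests for all unused blocks in the mounted file system
fstrim -v /mnt/data
# Alternatively, unmap continuously as files are deleted
mount -o discard /dev/sdb1 /mnt/data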

Host SCSI UNMAP commands drive more I/O workload to back-end storage. In some
circumstances (for example, volumes on a heavily loaded NL-serial-attached SCSI (SAS)
array), this situation can cause an increase in response times on volumes that use the same
storage. Also, host formatting time is likely to increase compared to a system that does not
support the SCSI UNMAP command.

If you use DRPs, an overprovisioned back end that supports UNMAP, or FCM modules, it is a
best practice to turn on SCSI UNMAP support. Host UNMAP support is enabled by default.

If only standard pools are configured and the back end is traditional (fully provisioned),
consider keeping host UNMAP support turned off because it does not provide any benefit.

To check and modify the current settings for host SCSI UNMAP support, run the lssystem and
chsystem CLI commands, as shown in Example 9-8.

Example 9-8 Turning on host UNMAP support


IBM_Storwize:ITSOV7K:superuser>lssystem | grep host_unmap
host_unmap off
IBM_Storwize:ITSOV7K:superuser>chsystem -hostunmap on
IBM_Storwize:ITSOV7K:superuser>

Note: You can switch host UNMAP support on and off nondisruptively on the system side.
However, hosts might need to rediscover storage, or (in the worst case) be restarted for
them to stop sending UNMAP commands.

9.3.4 Offloading I/O throttle
Throttles are a mechanism to control the amount of resources that are used when the system
is processing I/Os on supported objects. If a throttle limit is defined, the system either
processes the I/O for that object or delays its processing to free resources for more critical I/O
operations.

Offload commands, such as UNMAP and XCOPY, free hosts and speed the copy process by
offloading the operations of certain types of hosts to a storage system. These commands are
used by hosts to format new file systems, or copy volumes without the host needing to read
and then write data.

Offload commands can sometimes create I/O-intensive workloads, potentially taking
bandwidth from production volumes and affecting performance, especially if the underlying
storage cannot handle the amount of I/O that is generated.

Throttles can be used to delay processing for offloads to free bandwidth for other more critical
operations, which can improve performance but limits the rate at which host features, such as
VMware VMotion, can copy data. It can also increase the time that it takes to format file
systems on a host.

Note: For systems that are managing any NL storage, it might be a best practice to set the
offload throttle to 100 MBps.

To implement offload throttle, run the mkthrottle command with the -type offload
parameter. In the GUI, select Monitoring → System, and then click System Actions → Edit
System Offload Throttle, as shown in Figure 9-16.

Figure 9-16 Setting an offload throttle
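
The following CLI sketch creates an offload throttle with a bandwidth limit of 100 MBps (the
value from the preceding note is used only as an example) and then lists the configured
throttles:

IBM_Storwize:ITSOV7K:superuser>mkthrottle -type offload -bandwidth 100
IBM_Storwize:ITSOV7K:superuser>lsthrottle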

9.4 Data Reduction Pools


DRPs provide a set of techniques that can be used to reduce the amount of usable capacity
that is required to store data, which helps increase storage efficiency and reduce storage
costs. Available techniques include thin-provisioning, compression, and deduplication.

DRPs automatically reclaim used capacity that is no longer needed by host systems and
return it back to the pool as available capacity for future reuse.

Note: This book provides only an overview of DRP. For more information, see Introduction
and Implementation of Data Reduction Pools and Deduplication, SG24-8430.



9.4.1 Introduction to DRP
The system can use different data reduction methods simultaneously, which increases the
capacity savings across the entire storage pool.

DRPs support five types of volumes:


򐂰 Fully allocated
This type provides no data reduction.
򐂰 Thin-provisioned
This type provides data reduction by allocating storage on demand when writing to the
volume.
򐂰 Thin and compressed
In addition to being thin-provisioned, data is compressed before being written to storage.
򐂰 Thin and deduplicated
In addition to being thin-provisioned, duplicates of data blocks are detected and replaced
with references to the first copy.
򐂰 Thin, compressed, and deduplicated
This type achieves maximum data reduction by combining thin-provisioning, compression,
and deduplication.

Volumes in a DRP track when capacity is freed from hosts and possible unused capacity that
can be collected and reused within the storage pool. When a host no longer needs the data
that is stored on a volume, the host system uses SCSI UNMAP commands to release that
capacity from the volume. When these volumes are in DRPs, that capacity becomes
reclaimable capacity, and is monitored, collected, and eventually redistributed back to the
pool for use by the system.

Note: If the usable capacity usage of a DRP exceeds more than 85%, I/O performance can
be affected. The system needs 15% of usable capacity available in DRPs to ensure that
capacity reclamation can be performed efficiently.

At its core, a DRP uses a Log Structured Array (LSA) to allocate capacity. An LSA enables a
tree-like directory to define the physical placement of data blocks independent of size and
logical location.

Each volume has a range of logical block addresses (LBAs), starting from 0 and ending with
the block address that fills the capacity. The LSA enables the system to allocate data
sequentially when written to volumes (in any order) and provides a directory that provides a
lookup to match volume LBA with physical addresses within the array. A volume in a DRP
contains directory metadata to store the mapping from logical address on the volume to
physical location on the backend storage.

This directory is too large to store in memory, so it must be read from storage as required.
The lookup and maintenance of this metadata results in I/O amplification. I/O amplification
occurs when a single host-generated read or write I/O results in more than one back-end
storage I/O request. For example, a read I/O request might need to read some directory
metadata in addition to the actual data. A write I/O request might need to read directory
metadata, and write updated directory metadata, journal metadata, and the actual data.

Conversely, data reduction uses compression and deduplication to reduce the size of the data,
so less data is written to the back-end storage.

9.4.2 DRP benefits
DRPs are a new type of storage pool that implement techniques such as thin-provisioning,
compression, and deduplication to reduce the amount of physical capacity that is required to
store data. Savings in storage capacity requirements translate into reduction in the cost of
storing the data.

The cost reductions that are achieved through software can facilitate the transition to all flash
storage. Flash storage has lower operating costs, lower power consumption, higher density,
and is cheaper to cool than disk storage. However, the cost of flash storage is still higher.
Data reduction can reduce the total cost of ownership (TCO) of an all-flash system to be
competitive with HDDs.

One benefit of DRP is in the form of capacity savings that are achieved by deduplication and
compression. Real-time deduplication identifies duplicate data blocks during write I/O
operations and stores a reference to the first copy of the data instead of writing the data to the
storage pool a second time. It does this task by maintaining a fingerprint database containing
hashes of data blocks already written to the pool. If new data that is written by hosts matches
an entry in this database, then a reference is generated in the directory metadata instead of
writing the new data.

Compression reduces the size of the host data that is written to the storage pool. DRP uses
the Lempel-Ziv based real-time compression (RtC) and decompression algorithm. It offers a
new implementation of data compression that is fully integrated into the IBM Spectrum
Virtualize I/O stack. It makes optimal use of node resources such as memory and CPU cores,
and uses hardware acceleration on supported platforms efficiently. DRP compression
operates on small block sizes, which results in consistent and predictable performance.

Deduplication and compression can be combined, in which case data is first deduplicated and
then compressed. Therefore, deduplication references are created on the compressed data
that is stored on the physical domain.

DRP supports end-to-end SCSI UNMAP functions. Hosts use the set of SCSI UNMAP commands
to indicate that the formerly used capacity is no longer required on a target volume.
Reclaimable capacity is unused capacity that is created when data is overwritten, volumes
are deleted, or when data is marked as unneeded by a host by using the SCSI UNMAP
command. That capacity can be collected and reused on the system.

DRPs, the directory, and the actual reduction techniques are designed around optimizing for
flash and future solid-state storage technologies. All metadata operations are 4 KB, which is
ideal for flash storage to maintain low and consistent latency. All data read operations are 8
KB (before reduction) and designed to minimize latency because flash storage is suitable for
small block workload with high IOPS. All write operations are coalesced into
256 KB sequential writes to simplify the garbage collection on flash devices and gain full
stride writes from RAID arrays.

DRP works well with Easy Tier. The directory metadata of DRPs does not fit in memory, so it
is stored on disk by using dedicated metadata volumes that are separate from the actual data.
The metadata volumes are small but frequently accessed by small block I/O requests.
Performance gains are expected because they are optimal candidates for promotion to the
fastest tier of storage through Easy Tier. In contrast, data volumes are large but frequently
rewritten data is grouped to consolidate “heat”. Easy Tier can work as usual to accurately
identify active data.



9.4.3 Planning for DRP
Before configuring and using DRPs in production environments, you must plan the capacity
and performance. DRP has different performance characteristics than standard pools, so
existing sizing models cannot be used directly without modifications.

For more information about how to estimate the capacity savings that are achieved by
compression and deduplication, see 9.5, “Saving estimations for compression and
deduplication” on page 498.

The following software and hardware requirements are needed for DRP compression and
deduplication:
򐂰 Enabled Compression license, except on IBM FlashSystem 9100 and 7200 systems,
where it is included
򐂰 The system must run Version [Link] or higher

In most cases, it is a best practice to enable compression for all thin-provisioned and
deduplicated volumes. Overhead in DRPs is caused by metadata handling, which is the same
for compressed volumes and thin-provisioned volumes without compression.

If the system contains self-compressing drives, DRPs provide a major benefit only if
deduplication is used and the estimated deduplication savings are significant. If there is no
plan to use deduplication or the expected deduplication ratio is low, consider using fully
allocated volumes instead and use drive compression for capacity savings. For more
information about how to estimate deduplication savings, see 9.5.2, “Evaluating compression
and deduplication” on page 499.

In systems with self-compressing drives, certain system configurations make determining
accurate physical capacity on the system difficult. If the system contains self-compressed
drives and DRPs with thin-provisioned volumes without compression, the system cannot
determine the accurate amount of physical capacity that is used on the system. In this case,
overcommitting and losing access to write operations is possible. To prevent this situation
from happening, use compressed volumes (with or without deduplication) or fully allocated
volumes. Separate compressed volumes and fully allocated volumes by using separate pools.
Similar considerations apply to configurations with compressing back-end storage controllers,
as described in 9.6, “Overprovisioning and data reduction on external storage” on page 500.

A system supports a maximum of four DRPs. After this limit is reached, only standard pools
can be created.

A DRP uses a customer data volume per I/O group to store volume data. There is a limit on
the maximum size of a customer data volume of 128,000 extents per I/O group, which places
a limit on the maximum physical capacity in a pool after data reduction that depends on the
extent size, number of DRPs, and number of I/O groups, as shown in Table 9-2. DRPs have a
minimum extent size of 1024 MB.

Table 9-2 Maximum physical capacity after data reduction

Extent size   1 DRP - 1 I/O group   1 DRP - 4 I/O groups   4 DRP - 4 I/O groups
1024 MB       128 TiB               512 TiB                2 PiB
2048 MB       256 TiB               1 PiB                  4 PiB
4096 MB       512 TiB               2 PiB                  8 PiB
8192 MB       1 PiB                 4 PiB                  16 PiB

Overwriting data, unmapping data, and deleting volumes cause reclaimable capacity in the
pool to increase. Garbage collection is performed in the background to convert reclaimable
capacity to available capacity. This operation requires free capacity in the pool to operate
efficiently without impacting I/O performance. A best practice is to ensure that the provisioned
capacity with the DRP does not exceed 85% of the total usable capacity of the DRP.

To ensure that garbage collection works properly, there is a minimum capacity limit for a
single DRP that depends on the extent size and the number of I/O groups, as shown in Table 9-3. Even
when there are no volumes in the pool, some of the space is used to store metadata. The
required metadata capacity depends on the total capacity of the storage pool and on the
extent size, which should be considered when planning capacity.

Table 9-3 Minimum capacity in a single Data Reduction Pool

Extent size   1 I/O group   4 I/O groups
1024 MB       255 GiB       1 TiB
2048 MB       0.5 TiB       2 TiB
4096 MB       1 TiB         4 TiB
8192 MB       2 TiB         8 TiB

Note: The default extent size in a DRP is 4 GB. If the estimated total capacity in the pool
exceeds the documented limits, choose a larger extent size. If the estimated total capacity
is relatively small, consider using a smaller extent size for a smaller metadata impact and
lower minimum capacity limit.

For more information about the considerations of using data reduction on the system and the
back-end storage, see 9.6, “Overprovisioning and data reduction on external storage” on
page 500.

9.4.4 Implementing DRP with compression and deduplication


To use all data reduction technologies on the system, you must create a DRP, create volumes
within the DRP, and map these volumes to hosts that support SCSI UNMAP commands. The
implementation process for DRP is like standard pools, but has its own specifics.

Creating pools and volumes


To create a DRP, select Pools → Pools, and select Data Reduction in the Create Pool
dialog. For more information about how to create a storage pool and populate it with MDisks,
see Chapter 5, “Storage pools” on page 221.
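
A DRP can also be created from the CLI by running the mkmdiskgrp command with data
reduction enabled. The pool name and extent size in the following sketch are examples only:

IBM_Storwize:ITSOV7K:superuser>mkmdiskgrp -name DRPool0 -ext 4096 -datareduction yes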

To create a volume within a DRP, select Volumes → Volumes, and click Create Volumes.



Figure 9-17 shows the Create Volumes dialog. In the Capacity Savings menu, the following
selections are available: None, Thin-Provisioned, and Compressed. If Compressed or
Thin-Provisioned is selected, the Deduplicated option also becomes available and can be
selected.

Figure 9-17 Creating a compressed volume
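
The equivalent CLI operation uses the mkvolume command. The following sketch, in which the
pool and volume names are illustrative, creates a volume in a DRP with compression and
deduplication enabled:

IBM_Storwize:ITSOV7K:superuser>mkvolume -name dedup_vol0 -pool DRPool0 -size 1 -unit tb -compressed -deduplicated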

Capacity monitoring
Capacity monitoring in DRPs is mainly done on the system and storage pool levels. Use the
Dashboard in the GUI to view a summary of the capacity usage and capacity savings of the
entire system.

The Pools page in the management GUI is used for reporting on the storage pool level and
displays Usable Capacity and Capacity Details. Usable Capacity indicates the amount of
capacity that is available for storing data on a pool after formatting and RAID techniques are
applied. Capacity Details is the capacity that is available for volumes before any capacity
savings methods are applied. To monitor this capacity, select Pools → Pools, as shown in
Figure 9-18.

Figure 9-18 Data Reduction Pool capacity overview

To see more detailed capacity reporting including the warning threshold and capacity savings,
open the pool properties dialog by right-clicking a pool and selecting Properties. This dialog
shows the savings that are achieved by thin-provisioning, compression, and deduplication,
and the total data reduction savings in the pool, as shown in Figure 9-19. In addition, the
Reclaimable capacity is shown, which is unused capacity that is created when data is
overwritten, volumes are deleted, or when data is marked as unneeded by a host by using the
SCSI UNMAP command. This capacity is converted to available capacity by the garbage
collection background process.

Figure 9-19 Capacity reporting in a Data Reduction Pool



Thin-provisioned, compressed, and deduplicated volumes do not provide detailed
per-volume capacity reporting, as shown in the Volumes by Pool pane in Figure 9-20. Only
the Capacity (which is the provisioned capacity that is available to hosts) is shown. Real
capacity, Used capacity, and Compression savings are not applicable for volumes with
capacity savings. Only fully allocated volumes display those parameters.

Figure 9-20 Volumes in a DRP pool

Per-volume compression savings are not visible directly, but they can be accurately estimated
by using the IBM Comprestimator, which is described in 9.5.1, “Evaluating compression
savings by using IBM Comprestimator” on page 498. The IBM Comprestimator can be used
on compressed volumes to analyze the volume level compression savings.

The CLI can be used for limited capacity reporting on the volume level. The
used_capacity_before_reduction entry indicates the total amount of data that is written to a
thin-provisioned or compressed volume copy in a data reduction storage pool before data
reduction occurs. This field is empty for fully allocated volume copies and volume copies not
in a DRP.

To find this value, run the lsvdisk command with a volume name or ID as a parameter, as
shown in Example 9-9. It shows a thin-provisioned volume without compression and
deduplication with a virtual size of 1 TiB that is provisioned to the host. A 53 GB file was
written from the host.

Example 9-9 Data Reduction Pool volume capacity reporting on the CLI
IBM_Storwize:ITSOV7K:superuser>lsvdisk thin_provisioned
id 34
name vdisk1

capacity 1.00TB

used_capacity
real_capacity
free_capacity

tier tier_scm
tier_capacity 0.00MB
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 0.00MB

compressed_copy no
uncompressed_used_capacity
deduplicated_copy no
used_capacity_before_reduction 53.04GB

The used, real, and free capacity, and the capacity that is stored on each storage tier, is not
shown for volumes (except fully allocated volumes) in DRPs.

Capacity reporting on the pool level is available by running the lsmdiskgrp command with the
pool ID or name as a parameter, as shown in Example 9-10.

Example 9-10 Data Reduction Pool capacity reporting on the CLI


IBM_Storwize:ITSOV7K:superuser>lsmdiskgrp 1 | grep -E "capacity|compression|tier tier"
capacity 5.00TB
free_capacity 2.87TB
virtual_capacity 4.00TB
used_capacity 1.14TB
real_capacity 1.14TB
tier tier_scm
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier tier0_flash
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier tier1_flash
tier_capacity 5.00TB
tier_free_capacity 3.85TB
tier tier_enterprise
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier tier_nearline
tier_capacity 0.00MB
tier_free_capacity 0.00MB
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
child_mdisk_grp_capacity 0.00MB
used_capacity_before_reduction 143.68GB
used_capacity_after_reduction 94.64GB
overhead_capacity 52.00GB
deduplication_capacity_saving 36.20GB
reclaimable_capacity 0.00MB
physical_capacity 5.00TB
physical_free_capacity 3.85TB

Compression-related properties are not valid for DRPs.

For more information about every reported value, see IBM Knowledge Center and expand
Command-line interface → Storage pool commands → lsmdiskgrp.



Migrating to and from a DRP
Data migration from or to a DRP is done by using volume mirroring. A second copy in the
target pool is added to the source volume, and the original copy is optionally removed after
the synchronization process completes. If the volume already has two copies, one of the
copies must be removed or a more complex migration scheme that uses FlashCopy, RC, host
mirroring, or similar must be used.

The migration processes work as follows:


1. To create a second copy, right-click the source volume and choose Add Volume Copy, as
shown in Figure 9-21. Choose the target pool of the migration for the second copy and
select the capacity savings.
2. After you click Add, synchronization starts. The time that synchronization takes to
complete depends on the volume size, the system performance, and the configured
migration rate. You can increase the synchronization rate by right-clicking the volume and
selecting Modify Mirror Sync Rate.

Figure 9-21 Add Volume Copy dialog

When both copies are synchronized, Yes is displayed for both copies in the Synchronized
column in the Volumes pane. You can track the synchronization process by using the
Running tasks pane, as shown in Figure 9-22. After it reaches 100% and the copies are
in-sync, you can complete migration by deleting the source copy.

Figure 9-22 Synchronization progress
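
The same migration can be performed on the CLI. The following sketch, with example pool and
volume names, adds a compressed and deduplicated copy in a DRP and removes the original
copy automatically when synchronization completes. Depending on the code level, thin or
compressed copies might also require the -rsize and -autoexpand parameters:

IBM_Storwize:ITSOV7K:superuser>addvdiskcopy -mdiskgrp DRPool0 -compressed -deduplicated -autodelete myvol0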

Garbage collection and volume deletion
DRP includes built-in capabilities to enable garbage collection of unused storage capacity.
Garbage collection is a DRP process that reduces the amount of data that is stored on
external storage systems and internal drives by reclaiming previously used storage resources
that are no longer needed by host systems.

When a DRP is created, the system monitors the pool for reclaimable capacity from host
UNMAP operations. Freeing space from a host OS is a process called unmapping: hosts
indicate that the allocated capacity is no longer required on a target volume. The freed
space is collected and reused by the system automatically without having to reallocate the
capacity manually.

Removing thin-provisioned or compressed volume copies in a DRP is an asynchronous
operation. Volume copies that are removed from the system enter the deleting state, during
which the used capacity of the copies is converted to reclaimable capacity in the pool by
using a background deletion process. The removal process of deduplicated volume copies
searches and moves deduplication references that other volumes might have to the deleting
volume copies. This task is done to ensure that deduplicated data that was on deleted copies
continues to be available for other volumes in the system.

After this process completes, the volume copies are deleted and disappear from the system
configuration. In a second step, garbage collection can give the reclaimable capacity that is
generated in the first step back to the pool as available capacity, which means that the used
capacity of a removed volume is not available for reuse immediately after the removal.

The time that it takes to delete a thin-provisioned or compressed volume copy depends on the
size of the volume, the system configuration, and the workload. For deduplicated copies, the
duration also depends on the amount and size of other deduplicated copies in the pool, which
means that it might take a long time to delete a small deduplicated copy if there are many
other deduplicated volumes in the same pool. The deletion process is a background process
that does not impact system I/O performance.

The deleting state of a volume or volume copy can be seen by running the lsvdisk command.
The GUI hides volumes in this state, but it shows deleting volume copies if the volume
contains another copy.

Note: Removing thin-provisioned or compressed volume copies in a DRP might take a
long time to complete. Used capacity is not immediately given back to the pool as available
capacity.

When one copy of a mirrored volume is in the deleting state, it is not possible to add a copy to
the volume before the deletion finishes. If a new copy must be added without waiting for the
deletion to complete, first split the copy that must be deleted into a new volume, and then
delete the new volume and add a new second copy to the original volume. To split a copy into
a new volume, right-click the copy and select Split into New Volume in the GUI or run the
splitvdiskcopy command on the CLI.
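
The following CLI sketch, in which the volume name and copy ID are examples, splits copy 1
into a new volume and then deletes that new volume:

IBM_Storwize:ITSOV7K:superuser>splitvdiskcopy -copy 1 -name vol_to_delete myvol0
IBM_Storwize:ITSOV7K:superuser>rmvolume vol_to_delete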



9.5 Saving estimations for compression and deduplication
This section provides information about the tools that are used for sizing the environment for
compression and deduplication.

9.5.1 Evaluating compression savings by using IBM Comprestimator


IBM Comprestimator is a utility that estimates the capacity savings that can be achieved
when compression is used for storage volumes. The utility is integrated into the system by
using the GUI and the CLI. It can also be installed and used on host systems.
IBM Comprestimator provides a quick and accurate estimation of compression and
thin-provisioning benefits. The utility performs read-only operations, so it does not affect the
data that is stored on the volume.

If the compression savings prove to be beneficial in your environment, volume mirroring can
be used to convert volumes to compressed volumes.

To see the results and the date of the latest estimation cycle, as shown in Figure 9-23, go to
the Volumes pane, right-click any volume, and select Space Savings → Estimate
Compression Savings. If no analysis was done yet, the system suggests running it. A new
estimation of all volumes can be started from this dialog. To run or rerun the analysis on a
single volume, select Analyze in the Space Savings submenu.

Figure 9-23 Estimate Compression Savings

To analyze all the volumes on the system from the CLI, run the analyzevdiskbysystem
command.

The command analyzes all the current volumes that are created on the system. Volumes that
are created during or after the analysis are not included and can be analyzed individually. The
time that it takes to analyze all the volumes on the system depends on the number of volumes
that are being analyzed; expect results at approximately one minute per volume. For
example, if a system has 50 volumes, compression savings analysis takes approximately 50
minutes.

You can run an analysis on a single volume by specifying its name or ID as a parameter for
the analyzevdisk CLI command.

To check the progress of the analysis, run the lsvdiskanalysisprogress command. This
command displays the total number of volumes on the system, total number of volumes that
are remaining to be analyzed, and estimated time of completion.

To display information for the thin-provisioning and compression estimation analysis report for
all volumes, run the lsvdiskanalysis command.
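
For example, the following command sequence (the volume name is illustrative) analyzes a
single volume, checks the progress, and then displays the estimation results:

IBM_Storwize:ITSOV7K:superuser>analyzevdisk myvol0
IBM_Storwize:ITSOV7K:superuser>lsvdiskanalysisprogress
IBM_Storwize:ITSOV7K:superuser>lsvdiskanalysis myvol0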

If you are using a version of IBM Spectrum Virtualize that is older than Version 7.6 or if you
want to estimate the compression savings of another IBM or non-IBM storage system, the
separate IBM Comprestimator Utility can be installed on a host that is connected to the device
that needs to be analyzed. For more information and the latest version of this utility, see IBM
Comprestimator Utility Version [Link].

Consider the following best practices for using IBM Comprestimator:


򐂰 Run the IBM Comprestimator Utility before implementing an IBM Spectrum Virtualize
solution and DRPs.
򐂰 Download the latest version of the IBM Comprestimator Utility if you are not using one that
is included in your IBM Spectrum Virtualize solution.
򐂰 Use IBM Comprestimator to analyze volumes that contain as much active data as possible
rather than volumes that are nearly empty or newly created to ensure more accuracy
when sizing your environment for compression and DRPs.

Note: IBM Comprestimator can run for a long period (a few hours) when it is scanning a
relatively empty device. The utility randomly selects and reads 256 KB samples from the
device. If the sample is empty (that is, full of null values), it is skipped. A minimum number
of samples with data is required to provide an accurate estimation. When a device is
mostly empty, many random samples are empty. As a result, the utility runs for a longer
time as it tries to gather enough non-empty samples that are required for an accurate
estimate. The scan is stopped if the number of empty samples is over 95%.

9.5.2 Evaluating compression and deduplication


To help with the profiling and analysis of user workloads that are to be migrated to the new
system, IBM provides a highly accurate Data Reduction Estimation Tool (DRET) that supports
both deduplication and compression. The tool operates by scanning target workloads on any
legacy array (from IBM or a third party) and then merging all scan results to provide an
integrated system-level data reduction estimate. It provides a report of what it would expect
the deduplication and compression savings to be from data that is written to an existing disk.

The DRET utility uses advanced mathematical and statistical algorithms to perform an
analysis with a low memory footprint. The utility runs on a host that has access to the devices
to be analyzed. It performs only read operations, so it has no effect on the data that is stored
on the device.

The following sections provide information about installing DRET on a host and using it to
analyze devices on it. Depending on the environment configuration, in many cases DRET is
used on more than one host to analyze more data types.

When DRET is used to analyze a block device that is used by a file system, all underlying
data in the device is analyzed regardless of whether this data belongs to files that were
already deleted from the file system. For example, you can fill a 100 GB file system and make
it 100% used, and then delete all the files in the file system to make it 0% used. When
scanning the block device that is used for storing the file system in this example, DRET
accesses the data that belongs to the files that are deleted.



Important: The preferred method of using DRET is to analyze volumes that contain as
much active data as possible rather than volumes that are mostly empty of data, which
increases the accuracy level and reduces the risk of analyzing old data that is deleted but
might still have traces on the device.

For more information and the latest version of this utility, see Data Reduction Estimator Tool
Version 1.04.

9.6 Overprovisioning and data reduction on external storage


Starting with IBM Spectrum Virtualize V8.1.x, overprovisioning on selected back-end
controllers is supported, which means that if back-end storage performs data deduplication or
data compression on LUs that are provisioned from it, they still can be used as external
MDisks on the system. However, more configuration and monitoring considerations must be
accounted for.

Overprovisioned MDisks from controllers that are supported by this feature can be used as
managed mode MDisks in the system and can be added to storage pools (including DRPs).

Implementation steps for overprovisioned MDisks are the same as for fully allocated storage
controllers. The system detects whether the MDisk is overprovisioned, its total physical
capacity, and used and remaining physical capacity. It detects whether SCSI UNMAP
commands are supported by the back end. By sending SCSI UNMAP commands to
overprovisioned MDisks, the system marks data that is no longer in use. Then, the garbage
collection processes on the back end can free unused capacity and convert it to free space.

At the time of writing, the following back-end controllers are supported by overprovisioned
MDisks:
򐂰 IBM A9000 V12.1.0 and later
򐂰 IBM FlashSystem 900 V1.4
򐂰 IBM FlashSystem 9000 AE2 expansions
򐂰 IBM Storwize or IBM FlashSystem family systems with code V8.1.0 and later
򐂰 Pure Storage

Extra caution is required when planning and monitoring capacity for such configurations.
Table 9-4 shows an overview of configuration guidelines when using overprovisioned external
storage controllers.

Table 9-4 Using data reduction at two levels

System                Back end            Comments

DRP                   Fully allocated     Recommended. Use DRP on the system to plan for
                                          compression and deduplication. DRP at the top level
                                          provides the best application capacity reporting.

Fully allocated       Overprovisioned,    Recommended with appropriate precautions. Track
                      single tier of      physical capacity use carefully to avoid out-of-space
                      storage             conditions. The system can report physical use but
                                          does not manage to avoid out-of-space conditions.
                                          There is no visibility of each application’s use at the
                                          system layer. If the back end runs out of space, there
                                          is a limited ability to recover. Consider creating a
                                          sacrificial emergency space volume.

DRP with              Overprovisioned     Recommended with appropriate precautions. Assume
compression                               1:1 compression in back-end storage and do not
                                          overcommit capacity in the back end. Small extra
                                          savings are achieved from compressing DRP metadata.

Fully allocated       Overprovisioned,    Use with great care. Easy Tier is unaware of physical
                      multiple tiers of   capacity in tiers of a hybrid pool, so it tends to fill
                      storage             the top tier with the hottest data. Changes in the
                                          compressibility of data in the top tier can overcommit
                                          the storage, which leads to out-of-space conditions.

DRP with              Overprovisioned     Avoid. Very difficult to understand the physical
thin-provisioned or                       capacity use of the uncompressed volumes. High risk of
fully allocated                           overcommitting the back end. If a mix of DRP and fully
volumes                                   allocated volumes is required, use separate pools.

DRP                   DRP                 Avoid. Creates two levels of I/O amplification and
                                          capacity impact. DRP at the bottom layer provides no
                                          benefit.

When using DRPs with a compressing back-end controller, use compression in DRP and
avoid overcommitting the back end by assuming a 1:1 compression ratio in back-end storage.
Small, extra savings are realized from compressing metadata.

Fully allocated volumes on top of overprovisioned MDisks must be avoided or used with
extreme caution because this configuration can lead to overcommitting back-end storage.

The concept of provisioning groups is used for capacity reporting and monitoring of
overprovisioned external storage controllers. A provisioning group is an object that represents
a set of MDisks that share physical resources. Each overprovisioned MDisk is part of a
provisioning group that defines the physical storage resources that are available to a set of
MDisks.

Storage controllers report the usable capacity of an overprovisioned MDisk based on its
provisioning group. If multiple MDisks are part of the same provisioning group, then all these
MDisks share the physical storage resources and report the same usable capacity. However,
this usable capacity is not available to each MDisk individually because it is shared between
all these MDisks.

Provisioning groups are used differently depending on the back-end storage, as shown in the
following examples:
򐂰 IBM FlashSystem A9000 and IBM FlashSystem 900: The entire subsystem forms one
provisioning group.
򐂰 Storwize family systems: The storage pool forms a provisioning group, which enables
more than one independent provisioning group in a system.
򐂰 RAID with compressing drives: An array is a provisioning group that presents the physical
storage that is in use much like an external array.

Capacity usage should primarily be monitored on the overprovisioned back-end storage
controller itself.



From the system, capacity usage can be monitored on overprovisioned MDisks by using one
of the following methods:
򐂰 The GUI dashboard, as shown in Figure 9-24.

Figure 9-24 Dashboard button for overprovisioned storage monitoring

Click Overprovisioned External Storage to show an overview of overprovisioned MDisks
and provisioning groups that are used by the system, as shown in Figure 9-25.

Figure 9-25 View Provisioning Groups

򐂰 The MDisk properties window, which opens by selecting Pools → MDisks by Pools,
right-clicking an MDisk, and then selecting the Properties option, as shown in
Figure 9-26.

Figure 9-26 Thin-provisioned MDisk properties

򐂰 The lsmdisk command with an MDisk name or ID as a parameter, as shown in
Example 9-11.

Example 9-11 The lsmdisk parameters for thin-provisioned MDisks


IBM_Storwize:ITSOV7K:superuser>lsmdisk mdisk2
id 2
name mdisk2
status online
mode managed
<...>
dedupe no
<...>
over_provisioned yes
supports_unmap yes
provisioning_group_id
physical_capacity 299.00GB
physical_free_capacity 288.00GB
write_protected no
allocated_capacity 11.00GB
effective_used_capacity 300.00GB

The overprovisioning status and SCSI UNMAP support for the selected MDisk are displayed.



The physical_capacity and physical_free_capacity parameters belong to the MDisk’s
provisioning group. They indicate the total physical storage capacity and formatted
available physical space in the provisioning group that contains this MDisk.

Note: It is not recommended to create multiple storage pools from MDisks in a single
provisioning group.


Chapter 10. Advanced Copy Services


This chapter describes Advanced Copy Services, a group of functions that provide different
methods of copying data. The storage software capabilities that support interaction with
hybrid clouds are also described. These functions are enabled by IBM Spectrum Virtualize
software.

This chapter includes the following topics:


򐂰 10.1, “IBM FlashCopy” on page 506
򐂰 10.2, “Managing FlashCopy by using the GUI” on page 536
򐂰 10.3, “Transparent Cloud Tiering” on page 576
򐂰 10.4, “Implementing Transparent Cloud Tiering” on page 578
򐂰 10.5, “Volume mirroring and migration options” on page 590
򐂰 10.6, “Remote Copy services” on page 591
򐂰 10.7, “Remote Copy commands” on page 620
򐂰 10.8, “Native IP replication” on page 627
򐂰 10.9, “Managing Remote Copy by using the GUI” on page 648
򐂰 10.10, “Remote Copy memory allocation” on page 685
򐂰 10.11, “Troubleshooting Remote Copy” on page 686



10.1 IBM FlashCopy
Through the FlashCopy function of IBM Spectrum Virtualize, you can perform a point-in-time
(PiT) copy of one or more volumes. This section describes the inner workings of FlashCopy
and provides details about its configuration and use.

You can use FlashCopy to help you solve critical and challenging business needs that require
duplication of data of your source volume. Volumes can remain online and active while you
create consistent copies of the data sets. Because the copy is performed at the block level, it
operates below the host operating system (OS) and its cache. Therefore, the copy is not
apparent to the host unless it is mapped.

While the FlashCopy operation is performed, the source volume is frozen briefly to initialize
the FlashCopy bitmap, after which I/O can resume. Although several FlashCopy options
require the data to be copied from the source to the target in the background, which can take
time to complete, the resulting data on the target volume is presented so that the copy
appears to complete immediately. This feature means that the copy can immediately be
mapped to a host and is directly accessible for read and write operations.

10.1.1 Business requirements for FlashCopy


When you are deciding whether FlashCopy addresses your needs, you must adopt a
combined business and technical view of the problems that you want to solve. First,
determine the needs from a business perspective. Then, determine whether FlashCopy can
address the technical needs of those business requirements.

The business applications for FlashCopy are wide-ranging. Common use cases for
FlashCopy include, but are not limited to, the following examples:
򐂰 Rapidly creating consistent backups of dynamically changing data.
򐂰 Rapidly creating consistent copies of production data to facilitate data movement or
migration between hosts.
򐂰 Rapidly creating copies of production data sets for application development and testing,
auditing purposes and data mining, and quality assurance.

Regardless of your business needs, FlashCopy within the IBM Spectrum Virtualize is flexible
and offers a broad feature set, which makes it applicable to several scenarios.

Backup improvements with FlashCopy


FlashCopy does not reduce the time that it takes to perform a backup to a traditional backup
infrastructure. However, it can be used to minimize and, under certain conditions, eliminate
application downtime that is associated with performing backups. FlashCopy can also move
the resource usage of performing intensive backups away from production systems.

After the FlashCopy is performed, the resulting image of the data can be backed up to tape as
though it were the source system. After the copy to tape is complete, the image data is
redundant and the target volumes can be discarded. For time-limited applications, such as
these examples, “no copy” or incremental FlashCopy is used most often. Using these
methods puts less load on your servers’ infrastructure.

When FlashCopy is used for backup purposes, the target data is managed as read-only at the
OS level. This approach provides extra security by ensuring that your target data was not
modified and remains true to the source.

Restoring with FlashCopy
FlashCopy can perform a restore from any FlashCopy mapping. Therefore, you can restore
(or copy) from the target to the source of your regular FlashCopy relationships. Restoring
data from FlashCopy in this way can be described as reversing the direction of the
FlashCopy mappings.

This capability has the following benefits:


򐂰 There is no need to be concerned about pairing mistakes. You trigger a restore.
򐂰 The process appears instantaneous.
򐂰 You can maintain a pristine image of your data while you are restoring what was the
primary data.

This approach can be used for various applications, such as recovering your production
database application after an errant batch process that caused extensive damage.

Best practices: Although restoring from a FlashCopy is quicker than a traditional tape
media restore, you must not use restoring from a FlashCopy as a substitute for good
backup/archiving practices. Instead, keep one to several iterations of your FlashCopy
copies so that you can near-instantly recover your data from the most recent history, and
keep your long-term backup/archive for your business.

In addition to the restore option, which copies the original blocks from the target volume to
modified blocks on the source volume, the target can be used to perform a restore of
individual files. To do that task, you must make the target available on a host. As a best
practice, do not make the target available to the source host because seeing duplicates of
disks causes problems for most host OSs. Copy the files to the source by using normal host
data copy methods for your environment.

For more information about how to use reverse FlashCopy, see 10.1.12, “Reverse FlashCopy”
on page 527.

Moving and migrating data with FlashCopy


FlashCopy can be used to facilitate the movement or migration of data between hosts while
minimizing downtime for applications. By using FlashCopy, application data can be copied
from source volumes to new target volumes while applications remain online. After the
volumes are fully copied and synchronized, the application can be brought down and then
immediately brought back up on the new server that is accessing the new FlashCopy target
volumes.

This method differs from the other migration methods, which are described later in this
chapter. A common use for this capability is host hardware refresh.

Application testing with FlashCopy


It is important to test a new version of an application or OS that is using production data. This
testing ensures the highest quality possible for your environment. FlashCopy makes this type
of testing easy to accomplish without putting the production data at risk or requiring downtime
to create a constant copy.

You can create a FlashCopy of your source and use it for your testing. This copy is a duplicate
of your production data down to the block level so that even physical disk identifiers are
copied. Therefore, it is impossible for your applications to tell the difference.



You can also use the FlashCopy feature to create restart points for long running batch jobs.
This option means that if a batch job fails several days into its run, it might be possible to
restart the job from a saved copy of its data rather than rerunning the entire multiday job.

10.1.2 FlashCopy principles and terminology


The FlashCopy function creates a PiT or time-zero (T0) copy of data that is stored on a source
volume to a target volume by using a Copy on Write (CoW) and copy on-demand mechanism.

When a FlashCopy operation starts, a checkpoint creates a bitmap table that indicates that
no part of the source volume was copied. Each bit in the bitmap table represents one region
of the source volume and its corresponding region on the target volume. Each region is called
a grain.

The relationship between two volumes defines the way data is copied and is called a
FlashCopy mapping.

FlashCopy mappings between multiple volumes can be grouped in a consistency group to
ensure that their T0 is identical for all of them. A simple one-to-one FlashCopy mapping does
not need to belong to a consistency group.

Figure 10-1 shows the basic terms that are used with FlashCopy. All elements are explained
in this chapter.

Figure 10-1 FlashCopy terminology

10.1.3 FlashCopy mapping


The relationship between the source volume and the target volume is defined by a FlashCopy
mapping. The FlashCopy mapping can have three different types, four attributes, and seven
different states.

The FlashCopy mapping can be one of the following types:


򐂰 Snapshot: Sometimes referred to as nocopy. A T0 copy of a volume without a background
copy of the data from the source volume to the target. Only the changed blocks on the
source volume are copied. The target copy cannot be used without an active link to the
source.
򐂰 Clone: Sometimes referred to as full copy. A PiT copy of a volume with background copy
of the data from the source volume to the target. All blocks from the source volume are
copied to the target volume. The target copy becomes a usable independent volume.

򐂰 Backup: Sometimes referred to as incremental. A backup FlashCopy mapping consists of
a PiT full copy of a source volume, plus periodic increments or “deltas” of data that
changed between two PiTs.

The FlashCopy mapping has four property attributes (clean rate, copy rate, autodelete, and
incremental) and seven different states that are described in “FlashCopy mapping states” on
page 521. Users can perform these actions on a FlashCopy mapping:
򐂰 Create: Define a source and target, and set the properties of the mapping.
򐂰 Prepare: The system must be prepared before a FlashCopy copy starts. It basically
flushes the cache and makes it “transparent” for a short time, so no data is lost.
򐂰 Start: The FlashCopy mapping is started and the copy begins immediately. The target
volume is immediately accessible.
򐂰 Stop: The FlashCopy mapping is stopped (by the system or by the user). Depending on
the state of the mapping, the target volume is usable or not.
򐂰 Modify: Some properties of the FlashCopy mapping can be modified after creation.
򐂰 Delete: Delete the FlashCopy mapping. This action does not delete volumes (source or
target) from the mapping.

The source and target volumes must be the same size. The minimum granularity that
IBM Spectrum Virtualize supports for FlashCopy is an entire volume. It is not possible to use
FlashCopy to copy only part of a volume.

Important: As with any T0 copy technology, you are bound by OS and application
requirements for interdependent data and the restriction to an entire volume.

The source and target volumes must belong to the same IBM Spectrum Virtualize system, but
they do not have to be in the same I/O group or storage pool.

Volumes that are members of a FlashCopy mapping cannot have their size increased or
decreased while they are members of the FlashCopy mapping.

All FlashCopy operations occur on FlashCopy mappings. FlashCopy does not alter the
volumes. However, multiple operations can occur concurrently on multiple FlashCopy
mappings by using consistency groups.
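
These actions correspond to CLI commands. The following sketch, in which the volume and
mapping names are examples, creates a mapping with a background copy rate of 50, prepares
and starts it in one step, and later stops it:

IBM_Storwize:ITSOV7K:superuser>mkfcmap -source prod_vol0 -target prod_vol0_copy -copyrate 50 -name fcmap0
IBM_Storwize:ITSOV7K:superuser>startfcmap -prep fcmap0
IBM_Storwize:ITSOV7K:superuser>stopfcmap fcmap0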

10.1.4 Consistency groups


To overcome the issue of dependent writes across volumes and to create a consistent image
of the client data, a FlashCopy operation must be performed on multiple volumes as an
atomic operation. To accomplish this method, the IBM Spectrum Virtualize supports the
concept of consistency groups.

Consistency groups address the requirement to preserve PiT data consistency across
multiple volumes for applications that include related data that spans multiple volumes. For
these volumes, consistency groups maintain the integrity of the FlashCopy by ensuring that
“dependent writes” are run in the application’s intended sequence. Also, consistency groups
provide an easy way to manage several mappings.

FlashCopy mappings can be part of a consistency group, even if there is only one mapping in
the consistency group. If a FlashCopy mapping is not part of any consistency group, it is
referred to as stand-alone.
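
On the CLI, a consistency group and its member mappings can be created and started
together, as shown in the following sketch (all names are illustrative):

IBM_Storwize:ITSOV7K:superuser>mkfcconsistgrp -name db_cg0
IBM_Storwize:ITSOV7K:superuser>mkfcmap -source db_data -target db_data_copy -consistgrp db_cg0
IBM_Storwize:ITSOV7K:superuser>mkfcmap -source db_log -target db_log_copy -consistgrp db_cg0
IBM_Storwize:ITSOV7K:superuser>startfcconsistgrp -prep db_cg0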



Dependent writes
It is crucial to use consistency groups when a data set spans multiple volumes. Consider the
following typical sequence of writes for a database update transaction:
1. A write is run to update the database log, which indicates that a database update is about
to be performed.
2. A second write is run to perform the actual update to the database.
3. A third write is run to update the database log, which indicates that the database update
completed successfully.

The database ensures the correct ordering of these writes by waiting for each step to
complete before the next step is started. However, if the database log (updates 1 and 3) and
the database (update 2) are on separate volumes, it is possible for the FlashCopy of the
database volume to occur before the FlashCopy of the database log. This sequence can
result in the target volumes seeing writes 1 and 3, but not 2 because the FlashCopy of the
database volume occurred before the write completed.

In this case, if the database was restarted by using the backup that was made from the
FlashCopy target volumes, the database log indicates that the transaction completed
successfully. In fact, it did not complete successfully because the FlashCopy of the volume
with the database file was started (the bitmap was created) before the write completed to the
volume. Therefore, the transaction is lost and the integrity of the database is in question.

Most of the actions that the user can perform on a FlashCopy mapping are the same for
consistency groups.

10.1.5 Crash-consistent copy and hosts considerations


FlashCopy consistency groups do not provide application consistency, but only ensure that
volume PiTs are consistent between them.

Because FlashCopy is at the block level, it is necessary to understand the interaction
between your application and the host OS. From a logical standpoint, it is easiest to think of
these objects as “layers” that sit on top of one another. The application is the topmost layer,
and beneath it is the OS layer.

Both of these layers have various levels and methods of caching data to provide better speed.
Because the storage device and FlashCopy sit below these layers, they are unaware of the
cache at the application or OS layers.

To ensure the integrity of the copy that is made, it is necessary to flush the host OS and
application cache for any outstanding reads or writes before the FlashCopy operation is
performed. Failing to flush the host OS and application cache produces what is referred to as
a crash-consistent copy.

The resulting copy requires the same type of recovery procedure, such as log replay and file
system checks, that is required after a host crash. FlashCopy copies that are crash-consistent
often can be used after file system and application recovery procedures.

Various OSs and applications provide facilities to stop I/O operations and ensure that all data
is flushed from the host cache. If these facilities are available, they can be used to prepare for
a FlashCopy operation. When this type of facility is unavailable, the host cache must be
flushed manually by quiescing the application and unmounting the file system or drives.

The target volumes are overwritten with a complete image of the source volumes. Before the
FlashCopy mappings are started, any data that is held on the host OS (or application) caches
for the target volumes must be discarded. The easiest way to ensure that no data is held in
these caches is to unmount the target volumes before the FlashCopy operation starts.

Best practice: From a practical standpoint, when you have an application that is backed
by a database and you want to make a FlashCopy of that application’s data, it is sufficient
in most cases to use the write-suspend method that is available in most modern
databases. This approach works because the database maintains strict control over
I/O.

Flushing data from both the application and the backing database remains the preferred
method because it is safer. However, the write-suspend method can be used when such
facilities do not exist or when your environment is time sensitive.

IBM Spectrum Protect Snapshot


FlashCopy is not application-aware, so a third-party tool is needed to link the application to
the FlashCopy operations.

IBM Spectrum Protect Snapshot protects data with integrated, application-aware snapshot
backup and restore capabilities that use FlashCopy technologies in IBM Spectrum Virtualize.

You can protect data that is stored by IBM Db2® for SAP, Oracle, Microsoft Exchange, and
Microsoft SQL Server applications. You can create and manage volume-level snapshots for
file systems and custom applications.

In addition, it enables you to manage frequent, near-instant, nondisruptive, and
application-aware backups and restores that use integrated application and VMware
snapshot technologies. IBM Spectrum Protect Snapshot can be widely used in both IBM and
non-IBM storage systems.

Note: For more information about how IBM Spectrum Protect Snapshot can help your
business, see IBM Knowledge Center.

10.1.6 Grains and bitmap: I/O indirection


When a FlashCopy operation starts, a checkpoint is made of the source volume. No data is
copied when a start operation occurs. Instead, the checkpoint creates a bitmap that indicates
that no part of the source volume was copied. Each bit in the bitmap represents one region of
the source volume. Each region is called a grain.

You can think of the bitmap as a simple table of ones and zeros. The table tracks the
difference between the grains of a source volume and the grains of a target volume. At the
creation of the FlashCopy mapping, the table is filled with zeros, which indicates that no grain
is copied yet. When a grain is copied from source to target, the region of the bitmap referring
to that grain is updated (for example, from “0” to “1”), as shown in Figure 10-2.

Figure 10-2 A simplified representation of grains and a bitmap

The grain size can be 64 KB or 256 KB. The default is 256 KB. The grain size cannot be
selected by the user when creating a FlashCopy mapping from the GUI. The FlashCopy
bitmap contains 1 bit for each grain. The bit records whether the associated grain is split by
copying the grain from the source to the target.

After a FlashCopy mapping is created, the grain size for that FlashCopy mapping cannot be
changed. When a FlashCopy mapping is created, if the grain size parameter is not specified
and one of the volumes is part of a FlashCopy mapping, the grain size of that mapping is
used.

If neither volume in the new mapping is part of another FlashCopy mapping, and at least one
of the volumes in the mapping is a compressed volume, the default grain size is 64 KB for
performance considerations. But other than in this situation, the default grain size is 256 KB.

Copy on Write and Copy on Demand


IBM Spectrum Virtualize FlashCopy uses the CoW mechanism to copy data from a source
volume to a target volume.

As shown in Figure 10-3 on page 513, when data is written on a source volume, the grain
where the to-be-changed blocks are stored is first copied to the target volume and then
modified on the source volume. The bitmap is updated to track the copy.

Figure 10-3 Copy on Write steps

With FlashCopy, the target volume is immediately accessible for both read and write
operations. Therefore, a target volume can be modified, even if it is part of a FlashCopy
mapping. As shown in Figure 10-4, when a write operation is performed on the target volume,
the grain that contains the blocks to be changed is first copied from the source (Copy on
Demand). It is then modified with the new value. The bitmap is modified so the grain from the
source is not copied again, even if it is changed or if a background copy is enabled.

Figure 10-4 Copy on Demand steps

Note: If all the blocks of the grain to be modified are changed, there is no need to copy the
source grain first. There is no copy on demand, and it is directly modified.

FlashCopy indirection layer


The FlashCopy indirection layer governs the I/O to the source and target volumes when a
FlashCopy mapping is started, which is done by using the FlashCopy bitmap. The purpose of
the FlashCopy indirection layer is to enable the source and target volumes for read and write
I/O immediately after the FlashCopy is started.

The indirection layer intercepts any I/O coming from a host (read or write operation) and
addressed to a FlashCopy volume (source or target). It determines whether the addressed
volume is a source or a target, its direction (read or write), and the state of the bitmap table for
the FlashCopy mapping that the addressed volume is in. Then, it decides what operation to
perform.

The following subsections show the different I/O indirections.

Reading from the source volume


When a user performs a read operation on the source volume, there is no redirection. The
operation is like what is done with a volume that is not part of a FlashCopy mapping.

Writing on the source volume


Performing a write operation on the source volume modifies a block or a set of blocks, which
modifies a grain on the source. It generates one of the following actions, depending on the
state of the grain to be modified:
򐂰 If the bitmap indicates that the grain was copied, the source grain is changed and the
target volume and the bitmap table remain unchanged, as shown in Figure 10-5.

Figure 10-5 Modifying an already copied grain on the source

򐂰 If the bitmap indicates that the grain was not yet copied, the grain is first copied on the
target (CoW), the bitmap table is updated, and the grain is modified on the source, as
shown in Figure 10-6.

Figure 10-6 Modifying a non-copied grain on the source

Writing on a target volume
Because FlashCopy target volumes are immediately accessible in read and write mode, it is
possible to perform write operations on the target volume when the FlashCopy mapping is
started.

Performing a write operation on the target generates one of the following actions, depending
on the bitmap:
򐂰 If the bitmap indicates the grain to be modified on the target is not yet copied, it is first
copied from the source (copy on demand). The bitmap is updated, and the grain is
modified on the target with the new value, as shown in Figure 10-7. The source volume
remains unchanged.

Figure 10-7 Modifying a non-copied grain on the target

Note: If the entire grain is to be modified and not only part of it (some blocks only), the
copy on demand is bypassed. The bitmap is updated, and the grain on the target is
modified but not copied first.

򐂰 If the bitmap indicates the grain to be modified on the target was copied, it is directly
changed. The bitmap is not updated, and the grain is modified on the target with the new
value, as shown in Figure 10-8.

Figure 10-8 Modifying an already copied grain on the target

Note: The bitmap is not updated in this case. Otherwise, it might be copied from the
source later if a background copy is ongoing or if write operations are made on the source.
That process overwrites the changed grain on the target.

Reading from a target volume


Performing a read operation on the target volume returns the value in the grain on the source
or on the target, depending on the bitmap:
򐂰 If the bitmap indicates that the grain was copied from the source or that the grain was
modified on the target, the grain on the target is read, as shown in Figure 10-9.
򐂰 If the bitmap indicates that the grain was not yet copied from the source or was not
modified on the target, the grain on the source is read, as shown in Figure 10-9.

Figure 10-9 Reading a grain on a target

If the source has multiple targets, the indirection layer algorithm behaves differently on target
I/Os. For more information about multi-target operations, see 10.1.11, “Multiple target
FlashCopy” on page 522.

10.1.7 Interaction with cache
IBM Spectrum Virtualize-based systems have their cache divided into upper and lower cache.
Upper cache serves mostly as write cache and hides the write latency from the hosts and
application. Lower cache is a read/write cache and optimizes I/O to and from disks.
Figure 10-10 shows the IBM Spectrum Virtualize cache architecture.

Figure 10-10 New cache architecture

This CoW process introduces latency into write operations. To isolate the active application
from this extra latency, the FlashCopy indirection layer is placed logically between the upper
and lower cache. Therefore, the extra latency that is introduced by the CoW process is
encountered only by the internal cache operations and not by the application.

The two-level cache provides more performance improvements to the FlashCopy mechanism.
Because the FlashCopy layer is above the lower cache in the IBM Spectrum Virtualize
Software stack, it can benefit from read prefetching and coalescing writes to back-end
storage. FlashCopy benefits from the two-level cache because upper cache write data does
not have to go directly to back-end storage but to the lower cache layer instead.

10.1.8 Background Copy Rate


The Background Copy Rate is a property of a FlashCopy mapping. A grain copy from the
source to the target can occur when triggered by a write operation on the source or target
volume, or when background copy is enabled. With background copy enabled, the target
volume eventually becomes a clone of the source volume at the time the mapping was started
(T0). When the copy completes, the mapping can be removed between the two volumes and
you can end up with two independent volumes.

The background copy rate property determines the speed at which grains are copied as a
background operation immediately after the FlashCopy mapping is started. That speed is
defined by the user when creating the FlashCopy mapping, and can be changed dynamically
for each individual mapping, regardless of its state. Mapping copy rate values can be 0 - 150,
with the corresponding speeds that are listed in Table 10-1.

When the background copy function is not performed (copy rate = 0), the target volume
remains a valid copy of the source data only while the FlashCopy mapping remains in place.

Table 10-1 Copy rate values

User-specified copy rate attribute value   Data copied/sec   256 KB grains/sec   64 KB grains/sec
1 - 10                                     128 KiB           0.5                 2
11 - 20                                    256 KiB           1                   4
21 - 30                                    512 KiB           2                   8
31 - 40                                    1 MiB             4                   16
41 - 50                                    2 MiB             8                   32
51 - 60                                    4 MiB             16                  64
61 - 70                                    8 MiB             32                  128
71 - 80                                    16 MiB            64                  256
81 - 90                                    32 MiB            128                 512
91 - 100                                   64 MiB            256                 1024
101 - 110                                  128 MiB           512                 2048
111 - 120                                  256 MiB           1024                4096
121 - 130                                  512 MiB           2048                8192
131 - 140                                  1 GiB             4096                16384
141 - 150                                  2 GiB             8192                32768

The grains per second numbers represent the maximum number of grains that IBM Spectrum
Virtualize copies per second. This amount assumes that the bandwidth to the managed disks
(MDisks) can accommodate this rate.

If IBM Spectrum Virtualize cannot achieve these copy rates because of insufficient bandwidth
from the nodes to the MDisks, the background copy I/O contends for resources on an equal
basis with the I/O that is arriving from the hosts. Background copy I/O and I/O that is arriving
from the hosts tend to see an increase in latency and a consequential reduction in
throughput.

Background copy and foreground I/O continue to progress, and do not stop, hang, or cause
the node to fail.
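
For example, the copy rate of an existing mapping can be changed dynamically with the
chfcmap command, and the progress of the background copy can be checked with the
lsfcmapprogress command. The mapping name fcmap0 and the output that is shown are
illustrative only:

IBM_Storwize:ITSO-V7k:superuser>chfcmap -copyrate 80 fcmap0
IBM_Storwize:ITSO-V7k:superuser>lsfcmapprogress fcmap0
id progress
0 62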

The background copy is performed by one of the nodes that belongs to the I/O group in which
the source volume resides. This responsibility is moved to the other node in the I/O group if
the node that performs the background and stopping copy fails.

10.1.9 Incremental FlashCopy
When a FlashCopy mapping is stopped, either because the entire source volume was copied
onto the target volume or because a user manually stopped it, the bitmap table is reset.
Therefore, when the same FlashCopy is started again, the copy process is restarted from the
beginning.

Using the -incremental option when creating the FlashCopy mapping enables the system to
keep the bitmap as it is when the mapping is stopped. Therefore, when the mapping is started
again (at another PiT), the bitmap is reused and only changes between the two copies are
applied to the target.
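
As a minimal sketch (the volume and mapping names are hypothetical), an incremental
mapping is created once, and each later start copies only the grains that changed since the
previous copy:

IBM_Storwize:ITSO-V7k:superuser>mkfcmap -source VOL_SRC -target VOL_TGT -name fcmap_inc -incremental -copyrate 80
IBM_Storwize:ITSO-V7k:superuser>startfcmap -prep fcmap_inc
(after the first copy completes and the mapping stops, at a later PiT)
IBM_Storwize:ITSO-V7k:superuser>startfcmap -prep fcmap_inc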

A system that provides the Incremental FlashCopy capability enables the system
administrator to refresh a target volume without having to wait for a full copy of the source
volume to be complete. At the point of refreshing the target volume, for a particular grain, if
the data changed on the source or target volumes, the grain from the source volume is copied
to the target.

The advantages of Incremental FlashCopy are useful only if a previous full copy of the source
volume was obtained. Incremental FlashCopy helps only with further recovery time objectives
(RTOs, which are the time that is needed to recover data from a previous state), and it does
not help with the initial RTO.

For example, as shown in Figure 10-11, a FlashCopy mapping was defined between a source
volume and a target volume with the -incremental option.

Figure 10-11 Incremental FlashCopy example

Consider the following points:
򐂰 The mapping is started on the Copy1 date. A full copy of the source volume is made, and
the bitmap is updated every time that a grain is copied. At the end of Copy1, all grains
were copied and the target volume is a replica of the source volume at the beginning of
Copy1. Although the mapping stopped (because of the -incremental option), the bitmap
is maintained.
򐂰 Changes are made on the source volume and the bitmap is updated, although the
FlashCopy mapping is not active. For example, grains E and C on the source are changed
in G and H, and their corresponding bits are changed in the bitmap. The target volume is
untouched.
򐂰 The mapping is started again on the Copy2 date. The bitmap indicates that only grains E
and C were changed, so only G and H are copied on the target volume. There is no need
to copy the other grains because they were already copied the first time. The copy time is
much quicker than for the first copy because only a fraction of the source volume is copied.

10.1.10 Starting FlashCopy mappings and consistency groups


You can perform the actions of preparing, starting, or stopping FlashCopy on a stand-alone
mapping or a consistency group.

When using the CLI to perform FlashCopy on volumes, run a prestartfcmap or
prestartfcconsistgrp command before you start the FlashCopy (regardless of the type and
options specified). These commands put the cache into write-through mode and flush the I/O
that is bound for your volume. After FlashCopy starts, an effective copy of a source volume to
a target volume is created.

The content of the source volume is presented immediately on the target volume, and the
original content of the target volume is lost.

FlashCopy commands can then be issued to the FlashCopy consistency group and
simultaneously for all of the FlashCopy mappings that are defined in the consistency group.
For example, when a FlashCopy start command is issued to the consistency group, all the
FlashCopy mappings in the consistency group are started concurrently. This simultaneous
start results in a PiT copy that is consistent across all of the FlashCopy mappings that are
contained in the consistency group.

Rather than using prestartfcmap or prestartfcconsistgrp, you can also use the -prep
parameter in the startfcmap or startfcconsistgrp command to prepare and start
FlashCopy in one step.
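
For example, a consistency group can be prepared and started in two steps, or in a single
step with the -prep parameter. The group name fccstgrp0 in this sketch is hypothetical:

IBM_Storwize:ITSO-V7k:superuser>prestartfcconsistgrp fccstgrp0
IBM_Storwize:ITSO-V7k:superuser>startfcconsistgrp fccstgrp0

Or, in one step:

IBM_Storwize:ITSO-V7k:superuser>startfcconsistgrp -prep fccstgrp0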

Important: After an individual FlashCopy mapping is added to a consistency group, it can
be managed as part of the group only. Operations such as prepare, start, and stop are no
longer allowed on the individual mapping.

FlashCopy mapping states
At any point, a mapping is in one of the following states:
򐂰 Idle or copied
The source and target volumes act as independent volumes, even if a mapping exists
between the two. Read and write caching is enabled for both the source and the target
volumes. If the mapping is incremental and the background copy is complete, the mapping
records only the differences between the source and target volumes. If the connection to
both nodes in the I/O group that the mapping is assigned to is lost, the source and target
volumes are offline.
򐂰 Copying
The copy is in progress. Read and write caching is enabled on the source and the target
volumes.
򐂰 Prepared
The mapping is ready to start. The target volume is online but not accessible. The target
volume cannot perform read or write caching. Read and write caching is failed by the
Small Computer System Interface (SCSI) front end as a hardware error. If the mapping is
incremental and a previous mapping completed, the mapping records only the differences
between the source and target volumes. If the connection to both nodes in the I/O group
that the mapping is assigned to is lost, the source and target volumes go offline.
򐂰 Preparing
The target volume is online but not accessible. The target volume cannot perform read or
write caching. Read and write caching is failed by the SCSI front end as a hardware error.
Any changed write data for the source volume is flushed from the cache. Any read or write
data for the target volume is discarded from the cache. If the mapping is incremental and a
previous mapping completed, the mapping records only the differences between the
source and target volumes. If the connection to both nodes in the I/O group that the
mapping is assigned to is lost, the source and target volumes go offline.
򐂰 Stopped
The mapping is stopped because you ran a stop command or an I/O error occurred. The
target volume is offline and its data is lost. To access the target volume, you must restart
or delete the mapping. The source volume is accessible and the read and write cache is
enabled. If the mapping is incremental, the mapping is recording write operations to the
source volume. If the connection to both nodes in the I/O group that the mapping is
assigned to is lost, the source and target volumes go offline.
򐂰 Stopping
The mapping is copying data to another mapping. If the background copy process is
complete, the target volume is online while the stopping copy process completes. If the
background copy process is not complete, data is discarded from the target volume cache.
The target volume is offline while the stopping copy process runs. The source volume is
accessible for I/O operations.
򐂰 Suspended
The mapping started but did not complete. Access to the metadata is lost, which causes
both the source and target volume to go offline. When access to the metadata is restored,
the mapping returns to the copying or stopping state and the source and target volumes
return online. The background copy process resumes. If the data was not flushed and was
written to the source or target volume before the suspension, it is in the cache until the
mapping leaves the suspended state.
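
The current state of each mapping is reported in the status field of the lsfcmap command
output. The following sketch is trimmed to a few columns, and the names and values are
illustrative only:

IBM_Storwize:ITSO-V7k:superuser>lsfcmap -delim " "
id name source_vdisk_name target_vdisk_name status progress
0 fcmap0 VOL_SRC VOL_TGT copying 45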

Summary of FlashCopy mapping states
Table 10-2 lists the various FlashCopy mapping states, and the corresponding states of the
source and target volumes.

Table 10-2 FlashCopy mapping state summary

State           Source: Online/Offline   Source: Cache state   Target: Online/Offline       Target: Cache state
Idling/Copied   Online                   Write-back            Online                       Write-back
Copying         Online                   Write-back            Online                       Write-back
Stopped         Online                   Write-back            Offline                      N/A
Stopping        Online                   Write-back            Online if copy complete;     N/A
                                                               Offline if copy incomplete
Suspended       Offline                  Write-back            Offline                      N/A
Preparing       Online                   Write-through         Online but not accessible    N/A
Prepared        Online                   Write-through         Online but not accessible    N/A

10.1.11 Multiple target FlashCopy


A volume can be the source of multiple target volumes. A target volume can also be the
source of another target volume. But, a target volume can have only one source volume. A
source volume can have multiple target volumes, in one or multiple consistency groups. A
consistency group can contain multiple FlashCopy mappings (source-target relations). A
source volume can belong to multiple consistency groups. Figure 10-12 on page 523 shows
these different possibilities.

Every source-target relationship is a FlashCopy mapping and maintained with its own bitmap
table. There is no consistency group bitmap table.

Figure 10-12 Consistency groups and mappings combinations

When a source volume is in a FlashCopy mapping with multiple targets in multiple
consistency groups, it enables the copy of a single source at multiple PiTs. Therefore, multiple
versions of a single volume are kept.

Consistency group with multiple target FlashCopy


A consistency group aggregates FlashCopy mappings, not volumes. Therefore, where a
source volume has multiple FlashCopy mappings, they can be in the same or separate
consistency groups.

If a volume is the source volume for multiple FlashCopy mappings, you might want to create
separate consistency groups to separate each mapping of the same source volume.
Regardless of whether the source volume with multiple target volumes is in the same
consistency group or in separate consistency groups, the resulting FlashCopy produces
multiple identical copies of the source data.
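
For example (all names are hypothetical), two mappings from the same source volume can
be placed into two separate consistency groups so that each group can be started at its own
PiT:

IBM_Storwize:ITSO-V7k:superuser>mkfcconsistgrp -name GRP_HOURLY
IBM_Storwize:ITSO-V7k:superuser>mkfcconsistgrp -name GRP_DAILY
IBM_Storwize:ITSO-V7k:superuser>mkfcmap -source VOL_SRC -target VOL_TGT1 -consistgrp GRP_HOURLY
IBM_Storwize:ITSO-V7k:superuser>mkfcmap -source VOL_SRC -target VOL_TGT2 -consistgrp GRP_DAILY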

Dependencies
When a source volume has multiple target volumes, a mapping is created for each
source-target relationship. When data is changed on the source volume, it is first copied on
the target volume because of the CoW mechanism that is used by FlashCopy.

You can create up to 256 targets for a single source volume. Therefore, a single write
operation on the source volume might result in up to 256 write operations (one per target
volume). Such a workload could be more than the system can handle and could lead to a
heavy performance impact on front-end operations.

To avoid any significant impact on performance because of multiple targets, FlashCopy
creates dependencies between the targets. Dependencies can be considered as “hidden”
FlashCopy mappings that are not visible to and cannot be managed by the user. A
dependency is created between the most recent target and the previous one (in order of start
time). Figure 10-13 shows an example of a source volume with three targets.

When the three targets are started, Target T0 was started first and is considered the “oldest.”
Target T1 was started next and is considered the “next oldest,” and finally Target T2 was started
last and is considered the “most recent” or “newest.” The “next oldest” target for T2 is T1. The
“next oldest” target for T1 is T0. T2 is newer than T1, and T1 is newer than T0.

Figure 10-13 FlashCopy dependencies example

Source read with multiple target FlashCopy


There is no specific behavior for read operations on source volumes when there are multiple
targets for that volume. The data is always read from the source.

Source write with multiple target FlashCopy (CoW)


A write to the source volume does not cause its data to be copied to all the targets. Instead, it
is copied to the most recent target volume only. For example, consider the sequence of
events that are listed in Table 10-3 for a source volume and three targets that are started at
different times. In this example, there is no background copy. The “most recent” target is
indicated with an asterisk.

Table 10-3 Sequence example of write I/Os on a source with multiple targets

Time                                               Source volume   Target T0     Target T1     Target T2
Time 0: Mapping with T0 is started.                ABC DEF         ___ ___ *     Not started   Not started
Time 1: Change of “A” is made on source (->“G”).   GBC DEF         A__ ___ *     Not started   Not started
Time 2: Mapping with T1 is started.                GBC DEF         A__ ___       ___ ___ *     Not started
Time 3: Change of “E” is made on source (->“H”).   GBC DHF         A__ ___       ___ _E_ *     Not started
Time 4: Mapping with T2 is started.                GBC DHF         A__ ___       ___ _E_       ___ ___ *
Time 5: Change of “F” is made on source (->“I”).   GBC DHI         A__ ___       ___ _E_       ___ __F *
Time 6: Change of “G” is made on source (->“J”).   JBC DHI         A__ ___       ___ _E_       G__ __F *
Time 7: Stopping Source-T2 mapping.                JBC DHI         A__ ___       G__ _EF *     Stopped
Time 8: Stopping Source-T1 mapping.                JBC DHI         A__ _EF *     Stopped       Stopped

Note: “*” indicates the most recent target. Each volume is shown as two rows of three grains
(for example, ABC DEF).

An intermediate target disk (not the oldest or the newest) treats the set of newer target
volumes and the true source volume as a type of composite source. It treats all older volumes
as a kind of target (and behaves like a source to them).

Target read with multiple target FlashCopy


Target reading with multiple targets depends on whether the grain was copied:
򐂰 If the grain that is being read is copied from the source to the target, the read returns data
from the target that is being read.
򐂰 If the grain is not yet copied, each of the newer mappings is examined in turn. The read is
performed from the first copy (the oldest) that is found. If none is found, the read is
performed from the source.
For example, as shown in Figure 10-13 on page 524, if the yellow grain on T2 is read, it
returns “H” because there is no newer target than T2. Therefore, the source is read.
As another example, as shown in Figure 10-13 on page 524, if the red grain on T0 is read,
it returns “E” because there are two newer targets for T0, and T1 is the oldest of them.

Target write with multiple target FlashCopy (Copy on Demand)


A write to an intermediate or the newest target volume must consider the state of the grain
within its own mapping, and the state of the grain of the next oldest mapping. Consider the
following points:
򐂰 If the grain in the target that is being written is copied and if the grain of the next oldest
mapping is not yet copied, the grain must be copied before the write can proceed to
preserve the contents of the next oldest mapping.
For example, as shown in Figure 10-13 on page 524, if the grain “G” is changed on T2,
then it must be copied to T1 (next oldest not yet copied) first and then changed on T2.
򐂰 If the grain in the target that is being written is not yet copied, the grain is copied from the
oldest copied grain in the mappings that are newer than the target, or from the source if
none is copied. For example, as shown in Figure 10-13 on page 524, if the red grain on T0
is written, then it first is copied from T1 (data “E”). After this copy is done, the write can be
applied to the target.

Table 10-4 lists the indirection layer algorithm in multiple target FlashCopy.

Table 10-4 Summary of the FlashCopy indirection layer algorithm

Source volume, grain not copied:
  Read:  Read from the source volume.
  Write: Copy the grain to the most recently started target for this source, then write to the
         source.
Source volume, grain copied:
  Read:  Read from the source volume.
  Write: Write to the source volume.
Target volume, grain not copied:
  Read:  If any newer targets exist for this source in which this grain was copied, read from
         the oldest of these targets. Otherwise, read from the source.
  Write: Hold the write. Check the dependency target volumes to see whether the grain was
         copied. If the grain is not copied to the next oldest target for this source, copy the
         grain to the next oldest target. Then, write to the target.
Target volume, grain copied:
  Read:  Read from the target volume.
  Write: Write to the target volume.

Stopping a process in multiple target FlashCopy: Cleaning mode


When a mapping that contains a target that has dependent mappings is stopped, the
mapping enters the stopping state. Then, it begins copying all grains that are uniquely held
on the target volume of the mapping that is being stopped to the next oldest mapping that is in
the copying state. The mapping remains in the stopping state until all grains are copied, and
then enters the stopped state. This mode is referred to as the cleaning mode.

For example, if the mapping Source-T2 was stopped, the mapping enters the stopping state
while the cleaning process copies the data of T2 to T1 (next oldest). After all of the data is
copied, Source-T2 mapping enters the stopped state, and T1 is no longer dependent upon
T2. However, T0 remains dependent upon T1.

For example, as shown in Table 10-3 on page 524, if you stop the Source-T2 mapping on
“Time 7,” the grains that are not yet copied on T1 are copied from T2 to T1. Reading T1 is
then like reading the source at the time T1 was started (“Time 2”).

As another example, as shown in Table 10-3 on page 524, if you stop the Source-T1 mapping
on “Time 8,” the grains that are not yet copied on T0 are copied from T1 to T0. Reading T0 is
then like reading the source at the time T0 was started (“Time 0”).

If you stop the Source-T1 mapping while Source-T0 mapping and Source-T2 are still in
copying mode, the grains that are not yet copied on T0 are copied from T1 to T0 (next oldest).
T0 now depends upon T2.

Your target volume is still accessible while the cleaning process is running. When the system
is operating in this mode, it is possible that host I/O operations can prevent the cleaning
process from reaching 100% if the I/O operations continue to copy new data to the target
volumes.

Cleaning rate
The data rate at which data is copied from the target of the mapping being stopped to the next
oldest target is determined by the cleaning rate. This property of FlashCopy mapping can be
changed dynamically. It is measured like the copyrate property, but both properties are
independent. Table 10-5 lists the relationship of the cleaning rate values to the attempted
number of grains to be split per second.

Table 10-5 Cleaning rate values

User-specified cleaning rate attribute value   Data copied/sec   256 KB grains/sec   64 KB grains/sec
1 - 10                                         128 KiB           0.5                 2
11 - 20                                        256 KiB           1                   4
21 - 30                                        512 KiB           2                   8
31 - 40                                        1 MiB             4                   16
41 - 50                                        2 MiB             8                   32
51 - 60                                        4 MiB             16                  64
61 - 70                                        8 MiB             32                  128
71 - 80                                        16 MiB            64                  256
81 - 90                                        32 MiB            128                 512
91 - 100                                       64 MiB            256                 1024
101 - 110                                      128 MiB           512                 2048
111 - 120                                      256 MiB           1024                4096
121 - 130                                      512 MiB           2048                8192
131 - 140                                      1 GiB             4096                16384
141 - 150                                      2 GiB             8192                32768
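
Like the copy rate, the cleaning rate of an existing mapping can be changed dynamically with
the chfcmap command, for example before the mapping is stopped. The mapping name in this
sketch is hypothetical:

IBM_Storwize:ITSO-V7k:superuser>chfcmap -cleanrate 80 fcmap2
IBM_Storwize:ITSO-V7k:superuser>stopfcmap fcmap2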

10.1.12 Reverse FlashCopy


Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy mapping and without having to wait for the original copy
operation to complete. A FlashCopy source supports multiple targets (up to 256) and multiple
rollback points.

A key advantage of the IBM Spectrum Virtualize Multiple Target Reverse FlashCopy function
is that the reverse FlashCopy does not destroy the original target. This feature enables
processes that are using the target, such as a tape backup or tests, to continue uninterrupted.

IBM Spectrum Virtualize also can create an optional copy of the source volume to be made
before the reverse copy operation starts. This ability to restore back to the original source
data can be useful for diagnostic purposes.

The production disk is instantly available with the backup data. Figure 10-14 shows an
example of reverse FlashCopy with a simple FlashCopy mapping (single target).

Figure 10-14 A reverse FlashCopy example for data restoration

This example assumes that a simple FlashCopy mapping was created between the “source”
volume and “target” volume, and no background copy is set.

When the FlashCopy mapping starts (Date of Copy1), if the source volume is changed (write
operations on grain “A”), the modified grains are first copied to target, the bitmap table is
updated, and the source grain is modified (from “A” to “G”).

At some point (the “Corruption Date”), data is modified on another grain (grain “D” in the
figure), so that grain is first copied to the target volume and the bitmap table is updated.
Unfortunately, the new data that is written to the source volume is corrupted.

The storage administrator can then use the Reverse FlashCopy feature by completing these
steps:
1. Create a mapping from target to source (if one does not already exist). Because FlashCopy
   recognizes that the target volume of this new mapping is a source in another mapping, it
   does not create another bitmap table. It reuses the existing bitmap table, with its updated
   bits.
2. Start the new mapping. Because of the bitmap table, only the modified grains are copied.

After the restoration is complete, at the “Restored State” time, the source volume data is back
to what it was at the time of Copy1, before the Corruption Date. The copy can resume with the
restored data (Date of Copy2) and, for example, data on the source volume can be modified
(the “D” grain is changed to the “H” grain in the example). In this last case, because the “D”
grain was already copied, it is not copied again to the target volume.

Consistency groups are reversed by creating a set of reverse FlashCopy mappings and
adding them to a new reverse consistency group. Consistency groups cannot contain more
than one FlashCopy mapping with the same target volume.
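
As a sketch of these steps with hypothetical names (the -restore parameter of startfcmap
allows the reverse mapping to be started while its target is still the source of the original,
active mapping):

IBM_Storwize:ITSO-V7k:superuser>mkfcmap -source VOL_TGT -target VOL_SRC -name fcmap_rev
IBM_Storwize:ITSO-V7k:superuser>startfcmap -restore fcmap_rev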

10.1.13 FlashCopy and image mode volumes
FlashCopy can be used with image mode volumes. Because the source and target volumes
must be the same size, you must create a target volume with the same size as the image
mode volume when you are creating a FlashCopy mapping. To accomplish this task with the
CLI, run the svcinfo lsvdisk -bytes volumename command. The size in bytes is then used to
create the volume that is used in the FlashCopy mapping.

This method provides an exact number of bytes because image mode volumes might not line
up one-to-one on other measurement unit boundaries. Example 10-1 shows the size of the
ITSO-TSTRS001 volume. The ITSO-TSTRS002 volume is then created, which specifies the same
size.
Example 10-1 Listing the size of a volume in bytes and creating a volume of equal size
IBM_Storwize:ITSO-V7k:superuser>lsvdisk -bytes ITSO-TSTRS001
id 19
name ITSO-TSTRS001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name Pool0
capacity 1073741824
type striped
......

IBM_Storwize:ITSO-V7k:superuser>mkvdisk -mdiskgrp 0 -iogrp 0 -size 1073741824 -unit b -name ITSO-TSTRS002
Virtual Disk, id [20], successfully created
IBM_Storwize:ITSO-V7k:superuser>

IBM_Storwize:ITSO-V7k:superuser>lsvdisk -delim " "
19 ITSO-TSTRS001 0 io_grp0 online 0 Pool0 1.00GB striped 6005076400A7015B0800000000000016 0 1 not_empty 0 no 0 0 Pool0 yes no 19 ITSO-TSTRS001
20 ITSO-TSTRS002 0 io_grp0 online 0 Pool0 1.00GB striped 6005076400A7015B0800000000000017 0 1 not_empty 0 no 0 0 Pool0 yes no 20 ITSO-TSTRS002
IBM_Storwize:ITSO-V7k:superuser>

Tip: Alternatively, you can use the expandvdisksize and shrinkvdisksize volume
commands to modify the size of the volume.

These actions must be performed before a mapping is created.

10.1.14 FlashCopy mapping events
This section describes the events that modify the states of a FlashCopy. It also describes the
mapping events that are listed in Table 10-6.

Overview of a FlashCopy sequence of events: The following tasks show the FlashCopy
sequence:
1. Associate the source data set with a target location (one or more source and target
volumes).
2. Create a FlashCopy mapping for each source volume to the corresponding target
volume. The target volume must be equal in size to the source volume.
3. Discontinue access to the target (application-dependent).
4. Prepare (pre-trigger) the FlashCopy:
a. Flush the cache for the source.
b. Discard the cache for the target.
5. Start (trigger) the FlashCopy:
a. Pause I/O (briefly) on the source.
b. Resume I/O on the source.
c. Start I/O on the target.

Table 10-6 Mapping events

Create: A FlashCopy mapping is created between the specified source volume and the
specified target volume. The operation fails if any one of the following conditions is true:
򐂰 The source volume is already a member of 256 FlashCopy mappings.
򐂰 The node has insufficient bitmap memory.
򐂰 The source and target volumes are different sizes.

Prepare: The prestartfcmap or prestartfcconsistgrp command is directed to a consistency
group for FlashCopy mappings that are members of a normal consistency group, or to the
mapping name for FlashCopy mappings that are stand-alone mappings. The prestartfcmap
or prestartfcconsistgrp command places the FlashCopy mapping into the preparing state.
The prestartfcmap or prestartfcconsistgrp command can corrupt any data that was on the
target volume because cached writes are discarded. Even if the FlashCopy mapping is never
started, the data from the target might be changed logically during the act of preparing to
start the FlashCopy mapping.

Flush done: The FlashCopy mapping automatically moves from the preparing state to the
prepared state after all cached data for the source is flushed and all cached data for the
target is no longer valid.

Start: When all of the FlashCopy mappings in a consistency group are in the prepared state,
the FlashCopy mappings can be started. To preserve the cross-volume consistency group,
the start of all the FlashCopy mappings in the consistency group must be synchronized
correctly concerning I/Os that are directed at the volumes by using the startfcmap or
startfcconsistgrp command.
The following actions occur during the running of the startfcmap or startfcconsistgrp
command:
򐂰 New reads and writes to all source volumes in the consistency group are paused in the
cache layer until all ongoing reads and writes beneath the cache layer are complete.
򐂰 After all FlashCopy mappings in the consistency group are paused, the internal cluster
state is set to enable FlashCopy operations.
򐂰 After the cluster state is set for all FlashCopy mappings in the consistency group, read and
write operations continue on the source volumes.
򐂰 The target volumes are brought online.
As part of the startfcmap or startfcconsistgrp command, read and write caching is
enabled for the source and target volumes.

Modify: The following FlashCopy mapping properties can be modified:
򐂰 FlashCopy mapping name.
򐂰 Clean rate.
򐂰 Consistency group.
򐂰 Copy rate (for background copy or stopping copy priority).
򐂰 Automatic deletion of the mapping when the background copy is complete.

Stop: The following separate mechanisms can be used to stop a FlashCopy mapping:
򐂰 Issue a command.
򐂰 An I/O error occurred.

Delete: This command requests that the specified FlashCopy mapping is deleted. If the
FlashCopy mapping is in the copying state, the force flag must be used.

Flush failed: If the flush of data from the cache cannot be completed, the FlashCopy mapping
enters the stopped state.

Copy complete: After all of the source data is copied to the target and there are no dependent
mappings, the state is set to copied. If the option to automatically delete the mapping after
the background copy completes is specified, the FlashCopy mapping is deleted automatically.
If this option is not specified, the FlashCopy mapping is not deleted automatically and can be
reactivated by preparing and starting again.

Bitmap online/offline: The node failed.

10.1.15 Thin-provisioned FlashCopy
FlashCopy source and target volumes can be thin-provisioned.

Source or target thin-provisioned


The most common configuration is a fully allocated source and a thin-provisioned target. By
using this configuration, the target uses a smaller amount of real storage than the source.

With this configuration, use a copy rate of 0 only. In this state, the virtual capacity of the
target volume is identical to the capacity of the source volume, but the real capacity (the
capacity that is used on the storage system) is lower, as shown in Figure 10-15. The real size
of the target volume increases with writes that are performed on the source volume to grains
that are not yet copied. Eventually, if the entire source volume is written (which is unlikely),
the real capacity of the target volume becomes identical to that of the source volume.

Figure 10-15 Thin-provisioned target volume
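
For example (names and sizes are hypothetical), a thin-provisioned target with a small real
capacity that expands automatically can be created with the mkvdisk command, and the
mapping is then created with a copy rate of 0:

IBM_Storwize:ITSO-V7k:superuser>mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -name VOL_TGT_THIN
IBM_Storwize:ITSO-V7k:superuser>mkfcmap -source VOL_SRC -target VOL_TGT_THIN -copyrate 0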

Source and target thin-provisioned


When the source and target volumes are thin-provisioned, only the data that is allocated to
the source is copied to the target. In this configuration, the background copy option has no
effect.

Performance: The best performance is obtained when the grain size of the
thin-provisioned volume is the same as the grain size of the FlashCopy mapping.

Thin-provisioned incremental FlashCopy


The implementation of thin-provisioned volumes does not preclude the use of incremental
FlashCopy on the same volumes. It does not make sense to have a fully allocated source
volume and then use incremental FlashCopy (which is always a full copy the first time) to copy
this fully allocated source volume to a thin-provisioned target volume. However, this action is
not prohibited.

Consider the following optional configurations:


򐂰 A thin-provisioned source volume can be copied incrementally by using FlashCopy to a
thin-provisioned target volume. Whenever the FlashCopy is performed, only data that was
modified is recopied to the target. If space is allocated on the target because of I/O to the
target volume, this space is not reclaimed with subsequent FlashCopy operations.
򐂰 A fully allocated source volume can be copied incrementally by using FlashCopy to
another fully allocated volume concurrently as it is being copied to multiple
thin-provisioned targets (taken at separate PiTs). By using this combination, a single full
backup can be kept for recovery purposes, and the backup workload is separated from the
production workload. Also, older thin-provisioned backups can be retained.

10.1.16 Serialization of I/O by FlashCopy
In general, the FlashCopy function in IBM Spectrum Virtualize introduces no explicit
serialization into the I/O path. Therefore, many concurrent I/Os are allowed to the source and
target volumes.

However, there is a lock for each grain. The lock can be in shared or exclusive mode. For
multiple targets, a common lock is shared, and the mappings are derived from a particular
source volume. The lock is used in the following modes under the following conditions:
򐂰 The lock is held in shared mode during a read from the target volume, which touches a
grain that was not copied from the source.
򐂰 The lock is held in exclusive mode while a grain is being copied from the source to the
target.

If the lock is held in shared mode and another process wants to use the lock in shared mode,
this request is granted unless a process is already waiting to use the lock in exclusive mode.

If the lock is held in shared mode and is requested to be exclusive, the requesting process
must wait until all holders of the shared lock free it.

Similarly, if the lock is held in exclusive mode, a process wanting to use the lock in shared or
exclusive mode must wait for it to be freed.

10.1.17 Event handling


When a FlashCopy mapping is not copying or stopping, the FlashCopy function does not
affect the handling or reporting of events for error conditions that are encountered in the I/O
path. Event handling and reporting are affected only by FlashCopy when a FlashCopy
mapping is copying or stopping, that is, actively moving data.

This section describes these scenarios.

Node failure
Normally, two copies of the FlashCopy bitmap are maintained. One copy of the FlashCopy
bitmap is on each of the two nodes that make up the I/O group of the source volume. When a
node fails, one copy of the bitmap for all FlashCopy mappings whose source volume is a
member of the failing node’s I/O group becomes inaccessible.

FlashCopy continues with a single copy of the FlashCopy bitmap that is stored as non-volatile
in the remaining node in the source I/O group. The system metadata is updated to indicate
that the missing node no longer holds a current bitmap. When the failing node recovers or a
replacement node is added to the I/O group, the bitmap redundancy is restored.

Path failure (Path Offline state)


In a fully functioning system, all the nodes have a software representation of every volume in
the system within their application hierarchy.

Because the storage area network (SAN) that links IBM FlashSystem 9100 nodes to each
other and to the MDisks is made up of many independent links, it is possible for a subset of
the nodes to be temporarily isolated from several of the MDisks. When this situation happens,
the MDisks are said to be Path Offline on certain nodes.

Other nodes: Other nodes might see the MDisks as Online because their connection to
the MDisks is still up.

Path Offline for the source volume


If a FlashCopy mapping is in the copying state and the source volume goes Path Offline, this
Path Offline state is propagated to all target volumes up to, but not including, the target
volume for the newest mapping that is 100% copied but remains in the copying state. If no
mappings are 100% copied, all the target volumes are taken offline. Path Offline is a state that
exists on a per-node basis. Other nodes might not be affected. If the source volume comes
online, the target and source volumes are brought back online.

Path Offline for the target volume


If a target volume goes Path Offline but the source volume is still online, and if any dependent
mappings exist, those target volumes also go Path Offline. The source volume remains
online.

10.1.18 Asynchronous notifications


FlashCopy raises informational event log entries for certain mapping and consistency group
state transitions. These state transitions occur as a result of configuration events that
complete asynchronously. The informational events can be used to generate Simple Network
Management Protocol (SNMP) traps to notify the user.

Other configuration events complete synchronously, and no informational events are logged
for them. The following informational events are logged:
򐂰 PREPARE_COMPLETED
This state transition is logged when the FlashCopy mapping or consistency group enters
the prepared state as a result of a user request to prepare. The user can now start (or
stop) the mapping or consistency group.
򐂰 COPY_COMPLETED
This state transition is logged when the FlashCopy mapping or consistency group enters
the idle_or_copied state when it was in the copying or stopping state. This state
transition indicates that the target disk now contains a complete copy and no longer
depends on the source.
򐂰 STOP_COMPLETED
This state transition is logged when the FlashCopy mapping or consistency group enters
the stopped state as a result of a user request to stop. It is logged after the automatic copy
process completes. This state transition includes mappings where no copying needs to be
performed. This state transition differs from the event that is logged when a mapping or
group enters the stopped state as a result of an I/O error.

10.1.19 Interoperation with Metro Mirror and Global Mirror


A volume can be part of any copy relationship (FlashCopy, Metro Mirror (MM), or Remote
Mirror). Therefore, FlashCopy can work with MM / Global Mirror (GM) to provide better
protection of the data.

For example, you can perform an MM copy to duplicate data from Site_A to Site_B, and then
perform a daily FlashCopy to back up the data to another location.

Note: A volume cannot be part of FlashCopy, MM, or Remote Mirror if it is set to
Transparent Cloud Tiering (TCT) function.

Table 10-7 lists the supported combinations of FlashCopy and Remote Copy (RC). In the
table, RC refers to MM and GM.

Table 10-7 FlashCopy and Remote Copy interaction

FlashCopy source, at the RC primary site: Supported.

FlashCopy source, at the RC secondary site: Supported, with a latency consideration. When
the FlashCopy relationship is in the preparing and prepared states, the cache at the RC
secondary site operates in write-through mode, which adds latency to the RC relationship.

FlashCopy target, at the RC primary site: This combination is supported and has the
following restrictions:
򐂰 Running stop -force might cause the RC relationship to be fully resynchronized.
򐂰 The code level must be Version 6.2.x or later.
򐂰 The I/O group must be the same.

FlashCopy target, at the RC secondary site: This combination is supported with the major
restriction that the FlashCopy mapping cannot be copying, stopping, or suspended.
Otherwise, the restrictions are the same as at the RC primary site.

10.1.20 FlashCopy attributes and limitations


The FlashCopy function in IBM Spectrum Virtualize features the following attributes:
򐂰 The target is the T0 copy of the source, which is known as the FlashCopy mapping target.
򐂰 FlashCopy produces an exact copy of the source volume, including any metadata that was
written by the host OS, logical volume manager (LVM), and applications.
򐂰 The source volume and target volume are available (almost) immediately following the
FlashCopy operation.
򐂰 The source and target volumes must be the same “virtual” size.
򐂰 The source and target volumes must be on the same IBM FlashSystem 9100 system.
򐂰 The source and target volumes do not need to be in the same I/O group or storage pool.
򐂰 The storage pool extent sizes can differ between the source and target.
򐂰 The target volumes can be the source volumes for other FlashCopy mappings (cascaded
FlashCopy), but a target volume can have only one source copy.
򐂰 Consistency groups are supported to enable FlashCopy across multiple volumes at the
same time.
򐂰 The target volume can be updated independently of the source volume.
򐂰 Bitmaps that are governing I/O redirection (I/O indirection layer) are maintained in both
nodes of the system I/O group to prevent a single point of failure (SPOF).

򐂰 FlashCopy mapping and consistency groups can be automatically withdrawn after the
completion of the background copy.
򐂰 Thin-provisioned FlashCopy (or Snapshot) in the GUI uses disk space only when updates
are made to the source or target data, and not for the entire capacity of a volume copy.
򐂰 FlashCopy licensing is based on the virtual capacity of the source volumes.
򐂰 Incremental FlashCopy copies all of the data when you first start FlashCopy, and then only
the changes when you stop and start FlashCopy mapping again. Incremental FlashCopy
can substantially reduce the time that is required to re-create an independent image.
򐂰 Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship and having to wait for the original copy
operation to complete.
򐂰 The size of the source and target volumes cannot be altered (increased or decreased)
while a FlashCopy mapping is defined.

FlashCopy limitations for IBM Spectrum Virtualize V8.3 are listed in Table 10-8.

Table 10-8 FlashCopy limitations in Version 8.3

Property                                         Maximum number
FlashCopy mappings per system                    5000
FlashCopy targets per source                     256
FlashCopy mappings per consistency group         512
FlashCopy consistency groups per system          500
Total FlashCopy volume capacity per I/O group    4096 TiB

10.2 Managing FlashCopy by using the GUI


It is often easier to work with the FlashCopy function from the GUI if you have a reasonable
number of host mappings. However, in enterprise data centers with many host mappings, use
the CLI to run your FlashCopy commands.

10.2.1 FlashCopy presets


The IBM Spectrum Virtualize GUI interface provides three FlashCopy presets (Snapshot,
Clone, and Backup) to simplify the more common FlashCopy operations.

Although these presets meet most FlashCopy requirements, they do not support all possible
FlashCopy options. If more specialized options are required that are not supported by the
presets, the options must be performed by using CLI commands.

This section describes the preset options and their use cases.

536 Implementing IBM FlashSystem 9200, 9100, 7200, and 5100 Systems with IBM Spectrum Virtualize V8.3.1
Snapshot
This preset creates a CoW PiT copy. The snapshot is not intended to be an independent copy.
Instead, the copy is used to maintain a view of the production data at the time that the
snapshot is created. Therefore, the snapshot holds only the data from regions of the
production volume that changed since the snapshot was created. Because the snapshot
preset uses thin provisioning, only the capacity that is required for the changes is used.

Snapshot uses the following preset parameters:


򐂰 Background copy: None
򐂰 Incremental: No
򐂰 Delete after completion: No
򐂰 Cleaning rate: No
򐂰 Primary copy source pool: Target pool

Use case
The user wants to produce a copy of a volume without affecting the availability of the volume.
The user does not anticipate many changes to be made to the source or target volume, and a
significant proportion of the volumes remains unchanged.

By ensuring that only changes require a copy of data to be made, the total amount of disk
space that is required for the copy is reduced. Therefore, many Snapshot copies can be used
in the environment.

Snapshots are useful for providing protection against corruption or similar issues with the
validity of the data, but they do not provide protection from physical controller failures.
Snapshots can also provide a vehicle for performing repeatable testing (including “what-if”
modeling that is based on production data) without requiring a full copy of the data to be
provisioned.

For example, in Figure 10-16, the source volume user can still work on the original data
volume (such as a production volume) and the target volumes can be accessed instantly.
Users of target volumes can modify the content and perform “what-if” tests for example
(versioning). Storage administrators do not need to perform full copies of a volume for
temporary tests. However, the target volumes must remain linked to the source. Whenever
the link is broken (FlashCopy mapping stopped or deleted), the target volumes become
unusable.

Figure 10-16 FlashCopy Snapshot preset example

Clone
The Clone preset creates a replica of the volume, which can be changed without affecting the
original volume. After the copy completes, the mapping that was created by the preset is
automatically deleted.

Clone uses the following preset parameters:


򐂰 Background copy rate: 50
򐂰 Incremental: No
򐂰 Delete after completion: Yes
򐂰 Cleaning rate: 50
򐂰 Primary copy source pool: Target pool

Use case
Users want a copy of the volume that they can modify without affecting the original volume.
After the clone is established, there is no expectation that it is refreshed or any other need to
reference the original production data again. If the source is thin-provisioned, the target is
thin-provisioned for the auto-create target.

Backup
The Backup preset creates an incremental PiT replica of the production data. After the copy
completes, the backup view can be refreshed from the production data, with minimal copying
of data from the production volume to the backup volume.

Backup uses the following preset parameters:


򐂰 Background Copy rate: 50
򐂰 Incremental: Yes
򐂰 Delete after completion: No
򐂰 Cleaning rate: 50
򐂰 Primary copy source pool: Target pool

Use case
The user wants to create a copy of the volume that can be used as a backup if the source
becomes unavailable, such as due to loss of the underlying physical controller. The user
plans to periodically update the secondary copy, and does not want to suffer from the
resource demands of creating a copy each time. Incremental FlashCopy times are faster than
full copy, which helps to reduce the window where the new backup is not yet fully effective. If
the source is thin-provisioned, the target is also thin-provisioned in this option for the
auto-create target.

Another use case, which the preset name does not suggest, is to create and maintain
(periodically refresh) an independent image that can be subjected to intensive I/O (for
example, data mining) without affecting the source volume’s performance.

Note: IBM Spectrum Virtualize in general and FlashCopy in particular are not backup
solutions on their own. The FlashCopy Backup preset, for example, does not schedule a
regular copy of your volumes. It overwrites the mapping target and does not make a copy
of it before starting a new “backup” operation. It is the user’s responsibility to handle the
target volumes (for example, saving them to tapes) and the scheduling of the FlashCopy
operations.
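
For reference, an incremental mapping that behaves like the Backup preset can be sketched
on the CLI as follows. Names are placeholders, and as the note above explains, any
scheduling or offload of the target must still be handled outside the system:

# Incremental mapping: the first start performs a full copy
mkfcmap -source PRODVOL -target PRODVOL_bkp -copyrate 50 -cleanrate 50 -incremental -name backup_map_01
# Each later start copies only the grains that changed since the previous copy
startfcmap -prep backup_map_01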

10.2.2 FlashCopy window
This section describes the tasks that you can perform at a FlashCopy level by using the
IBM Spectrum Virtualize GUI.

When using the IBM Spectrum Virtualize GUI, FlashCopy components can be seen in
different windows. Three windows are related to FlashCopy and are reachable through the
Copy Services menu, as shown in Figure 10-17.

Figure 10-17 Copy Services menu

The FlashCopy window is accessible by selecting Copy Services → FlashCopy. All the
volumes that are defined in the system are displayed. Volumes that are part of a FlashCopy
mapping appear as shown in Figure 10-18. By clicking a source volume, you can display the
list of its target volumes.

Figure 10-18 Source and target volumes that are displayed in the FlashCopy window



All volumes are listed in this window, and target volumes appear twice (as a regular volume
and as a target volume in a FlashCopy mapping).

• The Consistency Groups window is accessible by selecting Copy Services →
Consistency Groups. Use the Consistency Groups window, as shown in Figure 10-19, to
list the FlashCopy mappings that are part of consistency groups and those that are not in
any consistency group.

Figure 10-19 Consistency Groups window

• The FlashCopy Mappings window is accessible by selecting Copy Services →
FlashCopy Mappings. Use the FlashCopy Mappings window, as shown in Figure 10-20,
to display the list of mappings between source volumes and target volumes.

Figure 10-20 FlashCopy mapping pane

10.2.3 Creating a FlashCopy mapping
This section describes creating FlashCopy mappings for volumes and their targets.

Open the FlashCopy window from the Copy Services menu, as shown in Figure 10-21.
Select the volume for which you want to create the FlashCopy mapping. Right-click the
volume or click the Actions menu.

Figure 10-21 FlashCopy window

Multiple FlashCopy mappings: To create multiple FlashCopy mappings simultaneously,
select multiple volumes by holding down the Ctrl key and clicking the entries that you want.

Depending on whether you created the target volumes for your FlashCopy mappings or you
want the system to create the target volumes for you, the following options are available:
• If you created the target volumes, see “Creating a FlashCopy mapping with existing target
volumes” on page 542.
• If you want the system to create the target volumes for you, see “Creating a FlashCopy
mapping and target volumes” on page 547.



Creating a FlashCopy mapping with existing target volumes
To use target volumes for the FlashCopy mappings, complete the following steps:

Attention: When starting a FlashCopy mapping from a source volume to a target volume,
data that is on the target is overwritten. The system does not prevent you from selecting a
target volume that is mapped to a host and already contains data.

1. Right-click the volume that you want to create a FlashCopy mapping for, and select
Advanced FlashCopy → Use Existing Target Volumes, as shown in Figure 10-22.

Figure 10-22 Creating a FlashCopy mapping with an existing target

The Create FlashCopy Mapping window opens, as shown in Figure 10-23 on page 543. In
this window, you map the selected source volume to the target volume that you want to use.
Click Add.

Figure 10-23 Selecting source and target for a FlashCopy mapping

Important: The source volume and the target volume must be of equal size. Therefore,
only targets of the same size are shown in the list for a source volume.

Volumes that are a target in a FlashCopy mapping cannot be a target in a new
mapping. Therefore, only volumes that are not targets can be selected.



To remove a mapping that was created, click the delete icon next to it, as shown in Figure 10-24.

2. Click Next after you create all of the mappings that you need, as shown in Figure 10-24.

Figure 10-24 Viewing the source and target at creation time

3. In the next window, select one FlashCopy preset. The GUI provides the following presets
to simplify common FlashCopy operations, as shown in Figure 10-25 on page 545. For
more information about the presets, see 10.2.1, “FlashCopy presets” on page 536:
– Snapshot: Creates a PiT snapshot copy of the source volume.
– Clone: Creates a PiT replica of the source volume.
– Backup: Creates an incremental FlashCopy mapping that can be used to recover data
or objects if the system experiences data loss. These backups can be copied multiple
times from source and target volumes.

Note: If you want to create a simple Snapshot of a volume, you likely want the target
volume to be defined as thin-provisioned to save space on your system. If you use an
existing target, ensure that it is thin-provisioned first. Using the Snapshot preset does
not make the system check whether the target volume is thin-provisioned.

Figure 10-25 FlashCopy mapping preset selection

When selecting a preset, some options, such as Background Copy Rate, Incremental, and
Delete mapping after completion, are automatically changed or selected. You can still
change the automatic settings, but this change is not recommended for the following
reasons:
– For example, if you select the Backup preset but then clear Incremental or select
Delete mapping after completion, you lose the benefits of Incremental FlashCopy
and must copy the entire source volume each time you start the mapping.
– As another example, if you select the Snapshot preset but then change the
Background Copy Rate, you have a full copy of your source volume.
For more information about the Background Copy Rate and Cleaning Rate options, see
Table 10-1 on page 518 or Table 10-5 on page 527.
When your FlashCopy mapping setup is ready, click Next.



4. You can choose whether to add the mappings to a consistency group, as shown in
Figure 10-26.
If you want to include this FlashCopy mapping in a consistency group, select Yes, add the
mappings to a consistency group and select the consistency group from the drop-down
menu.

Figure 10-26 Selecting a consistency group for the FlashCopy mapping

5. It is possible to add a FlashCopy mapping to a consistency group or to remove a
FlashCopy mapping from a consistency group after they are created. If you do not know
what to do at this stage, you can change it later. Click Finish.

The FlashCopy mapping is now ready for use. It is visible in the three different windows:
FlashCopy, FlashCopy mappings, and Consistency Groups.

Note: Creating a FlashCopy mapping does not automatically start any copy. You must
manually start the mapping.
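
The same task can be performed on the CLI. The following minimal sketch assumes that
equally sized source and target volumes exist; all names are placeholders:

# Create a mapping between existing volumes (the mapping is created in the idle state)
mkfcmap -source VOL01 -target VOL01_TGT -name fcmap_vol01
# Start the mapping when you are ready to establish the point-in-time copy
startfcmap -prep fcmap_vol01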

Creating a FlashCopy mapping and target volumes
To create target volumes for FlashCopy mapping, complete the following steps:
1. Right-click the volume that you want to create a FlashCopy mapping for, and select
Advanced FlashCopy → Create New Target Volumes, as shown in Figure 10-27.

Figure 10-27 Creating a FlashCopy mapping and creating targets



2. In the next window, select one FlashCopy preset. The GUI provides the following presets
to simplify common FlashCopy operations, as shown in Figure 10-28. For more
information about the presets, see 10.2.1, “FlashCopy presets” on page 536.
– Snapshot: Creates a PiT snapshot copy of the source volume.
– Clone: Creates a PiT replica of the source volume.
– Backup: Creates an incremental FlashCopy mapping that can be used to recover data
or objects if the system experiences data loss. These backups can be copied multiple
times from source and target volumes.

Note: If you want to create a simple Snapshot of a volume, you probably want the target
volume to be defined as thin-provisioned to save space on your system. If you use an
existing target, ensure that it is thin-provisioned first. Using the Snapshot preset does
not make the system check whether the target volume is thin-provisioned.

Figure 10-28 FlashCopy mapping preset selection

When selecting a preset, some options such as Background Copy Rate, Incremental, and
Delete mapping after completion are automatically changed or selected. You can still
change the automatic settings, but this change is not recommended for the following
reasons:
– For example, if you select the Backup preset but then clear Incremental or select
Delete mapping after completion, you lose the benefits of Incremental FlashCopy.
You must copy the entire source volume each time you start the mapping.
– As another example, if you select the Snapshot preset but then change the
Background Copy Rate, you have a full copy of your source volume.
For more information about the Background Copy Rate and Cleaning Rate options, see
Table 10-1 on page 518 or see Table 10-5 on page 527.

When your FlashCopy mapping setup is ready, click Next.
3. You can choose whether to add the mappings to a consistency group, as shown in
Figure 10-29.
If you want to include this FlashCopy mapping in a consistency group, select Yes, add the
mappings to a consistency group, and select the consistency group from the drop-down
menu.

Figure 10-29 Selecting a consistency group for the FlashCopy mapping

4. It is possible to add a FlashCopy mapping to a consistency group or to remove a
FlashCopy mapping from a consistency group after they are created. If you do not know
what to do at this stage, you can change it later. Click Next.



5. In this step, the system asks the user to select the pool that is used to automatically create
new targets, as shown in Figure 10-30. Click Next.

Figure 10-30 Selecting the pool

6. In this step, the system asks the user how to define the new volumes that are created, as
shown in Figure 10-31 on page 551. It can be None, Thin-provisioned, or Inherit from
source volume. If you select Inherit from source volume, the system checks the type of
the source volume and then creates a target of the same type. Click Finish.

Figure 10-31 Selecting the type of volumes for the created targets

Note: If you selected multiple source volumes to create FlashCopy mappings and you
selected Inherit properties from source Volume, it applies to each created target
volume. For example, if you selected a compressed volume and a generic volume as
sources for the new FlashCopy mappings, the system creates a compressed target and a
generic target.

The FlashCopy mapping is now ready for use. It is visible in the three different windows:
FlashCopy, FlashCopy mappings, and Consistency Groups.

10.2.4 Single-click snapshot


The Snapshot preset creates a PiT backup of production data. A snapshot is not intended to
be an independent copy. Instead, it is used to maintain a view of the production data at the
time that the snapshot is created. Therefore, a snapshot holds only the data from the regions
of the production volume that changed since the snapshot was created. Because the
Snapshot preset uses thin provisioning, only the capacity that is required for the changes is
used.

Snapshot uses the following preset parameters:


• Background copy: None
• Incremental: No
• Delete after completion: No
• Cleaning rate: No
• Primary copy source pool: Target pool



To create and start a snapshot, complete the following steps:
1. Open the FlashCopy window by selecting Copy Services → FlashCopy.
2. Select the volume that you want to create a snapshot of, and right-click it or select
Actions → Create Snapshot, as shown in Figure 10-32.

Figure 10-32 Single-click snapshot creation and start

3. You can select multiple volumes at a time, which automatically creates a snapshot of
each selected volume. The system then automatically groups the FlashCopy mappings in
a new consistency group, as shown in Figure 10-33 on page 553.

Figure 10-33 Single-click snapshot creation and start

4. For each selected source volume, the following actions occur:
– A FlashCopy mapping is automatically created. It is named fcmapXX by default.
– A target volume is created. By default, the source name is appended with a _XX suffix.
– A consistency group is created for each mapping, unless multiple volumes were
selected, in which case a single consistency group contains all of the new mappings.
Consistency groups are named fccstgrpX by default.

The newly created consistency group is automatically started.
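
The GUI automates several CLI steps here. The following hedged sketch shows roughly the
equivalent flow for a single volume; the exact commands and names that the GUI generates
can differ:

# Create a thin-provisioned target, a consistency group, and a snapshot-style mapping
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 50 -unit gb -rsize 2% -autoexpand -name DBVOL_01
mkfcconsistgrp -name fccstgrp0
mkfcmap -source DBVOL -target DBVOL_01 -copyrate 0 -consistgrp fccstgrp0
# Prepare and start all mappings in the group at the same point in time
startfcconsistgrp -prep fccstgrp0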

10.2.5 Single-click clone


The Clone preset creates a replica of the volume, which can be changed without affecting the
original volume. After the copy completes, the mapping that was created by the preset is
automatically deleted.

The Clone preset uses the following parameters:


• Background copy rate: 50
• Incremental: No
• Delete after completion: Yes
• Cleaning rate: 50
• Primary copy source pool: Target pool



To create and start a clone, complete the following steps:
1. Open the FlashCopy window by selecting Copy Services → FlashCopy.
2. Select the volume that you want to create a clone of, and right-click it or select Actions →
Create Clone, as shown in Figure 10-34.

Figure 10-34 Single-click clone creation and start

3. You can select multiple volumes at a time, which automatically creates a clone of each
selected volume. The system then automatically groups the FlashCopy mappings in a
new consistency group, as shown in Figure 10-35 on page 555.

Figure 10-35 Selection single-click clone creation and start

4. For each selected source volume, the following actions occur:
– A FlashCopy mapping is automatically created. It is named fcmapXX by default.
– A target volume is created. The source name is appended with an _XX suffix.
– A consistency group is created for each mapping, unless multiple volumes were
selected, in which case a single consistency group contains all of the new mappings.
Consistency groups are named fccstgrpX by default.
– The newly created consistency group is automatically started.

10.2.6 Single-click backup


The Backup preset creates a PiT replica of the production data. After the copy completes, the
backup view can be refreshed from the production data with minimal copying of data from the
production volume to the backup volume.

The Backup preset uses the following parameters:


• Background copy rate: 50
• Incremental: Yes
• Delete after completion: No
• Cleaning rate: 50
• Primary copy source pool: Target pool



To create and start a backup, complete the following steps:
1. Open the FlashCopy window by selecting Copy Services → FlashCopy.
2. Select the volume that you want to create a backup of, and right-click it or select
Actions → Create Backup, as shown in Figure 10-36.

Figure 10-36 Single-click backup creation and start

3. You can select multiple volumes at a time, which automatically creates a backup of each
selected volume. The system then automatically groups the FlashCopy mappings in a
new consistency group, as shown in Figure 10-37 on page 557.

Figure 10-37 Selecting single-click backup creation and start

4. For each selected source volume, the following actions occur:
– A FlashCopy mapping is automatically created. It is named fcmapXX by default.
– A target volume is created. It is named after the source name with a _XX suffix.
– A consistency group is created for each mapping, unless multiple volumes were
selected, in which case a single consistency group contains all of the new mappings.
Consistency groups are named fccstgrpX by default.
– The newly created consistency group is automatically started.

10.2.7 Creating a FlashCopy consistency group


To create a FlashCopy consistency group in the GUI, complete the following steps:
1. Open the Consistency Groups window by clicking Copy Services → Consistency
Groups. Click Create Consistency Group, as shown in Figure 10-38.

Figure 10-38 Creating a consistency group



2. Enter the FlashCopy consistency group name that you want to use and click Create, as
shown in Figure 10-39.

Figure 10-39 Entering the name of new consistency group

Consistency group name: You can use the letters A - Z and a - z, the numbers 0 - 9, and
the underscore (_) character. The consistency group name can be 1 - 63 characters.

Ownership groups: You can select one existing ownership group from the drop-down list.
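
On the CLI, the equivalent is a single command. The group name in this sketch is an
example, and the optional ownership group assignment is omitted:

# Create an empty FlashCopy consistency group
mkfcconsistgrp -name ITSO_CG_01
# Verify that the group was created
lsfcconsistgrp ITSO_CG_01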

10.2.8 Creating FlashCopy mappings


To create a FlashCopy Mapping in the GUI, complete the following steps:
1. Open the Consistency Groups window by clicking Copy Services → Consistency
Groups. This example assumes that the source and target volumes were created
previously.
2. Select the consistency group in which you want to create the FlashCopy mapping. If you
prefer not to create a FlashCopy mapping in a consistency group, select Not in a Group,
and right-click the selected option or select Actions → Create FlashCopy Mapping, as
shown in Figure 10-40 on page 559.

Figure 10-40 Creating a FlashCopy mapping

3. Select a volume in the source volume column by using the drop-down menu. Then, select
a volume in the target volume column by using the drop-down menu. Click Add, as shown
in Figure 10-41.

Figure 10-41 Selecting the source and target volumes for the FlashCopy mapping



Repeat this step to create other mappings. To remove a mapping that was created, click
the delete icon next to it. Click Next.

Important: The source and target volumes must be of equal size. Therefore, only the
targets with the appropriate size are shown for a source volume.

Volumes that are target volumes in another FlashCopy mapping cannot be the target of
a new FlashCopy mapping. Therefore, they do not appear in the list.

4. In the next window, select one FlashCopy preset. The GUI provides the following presets
to simplify common FlashCopy operations, as shown in Figure 10-42. For more
information about the presets, see 10.2.1, “FlashCopy presets” on page 536.
– Snapshot: Creates a PiT snapshot copy of the source volume.
– Clone: Creates a PiT replica of the source volume.
– Backup: Creates an incremental FlashCopy mapping that can be used to recover data
or objects if the system experiences data loss. These backups can be copied multiple
times from source and target volumes.

Figure 10-42 FlashCopy mapping preset selection

When selecting a preset, some options, such as Background Copy Rate, Incremental, and
Delete mapping after completion, are automatically changed or selected. You can still
change the automatic settings, but this action is not recommended for the following
reasons:
– For example, if you select the Backup preset but then clear Incremental or select
Delete mapping after completion, you lose the benefits of Incremental FlashCopy.
You must copy the entire source volume each time you start the mapping.
– As another example, if you select the Snapshot preset but then change the
Background Copy Rate, you end up with a full copy of your source volume.
For more information about the Background Copy Rate and Cleaning Rate options, see
Table 10-1 on page 518 or see Table 10-5 on page 527.
5. When your FlashCopy mapping setup is ready, click Finish.

10.2.9 Showing related volumes


To show related volumes for a specific FlashCopy mapping, complete the following steps:
1. Open the Copy Services FlashCopy Mappings window.
2. Right-click a FlashCopy mapping and select Show Related Volumes, as shown in
Figure 10-43. You can do the same from any of the other Copy Services windows by
right-clicking the mappings and selecting Show Related Volumes.

Figure 10-43 Showing related volumes for a mapping, a consistency group, or another volume



3. In the Related Volumes window, you can see the related mapping for a volume, as shown
in Figure 10-44. Click one of these volumes to see its properties.

Figure 10-44 Showing related volumes list

10.2.10 Moving FlashCopy mappings across consistency groups


To move one or multiple FlashCopy mappings to a consistency group, complete the following
steps:
1. Open the FlashCopy, Consistency Groups, or FlashCopy Mappings window.
2. Right-click the FlashCopy mappings that you want to move and select Move to
Consistency Group, as shown in Figure 10-45 on page 563.

Figure 10-45 Moving a FlashCopy mapping to a consistency group

Note: You cannot move a FlashCopy mapping that is in a copying, stopping, or
suspended state. The mapping must be in an idle, copied, or stopped state to be moved.

3. In the Move FlashCopy Mapping to Consistency Group window, select the consistency
group for the FlashCopy mappings selection by using the drop-down menu, as shown in
Figure 10-46.

Figure 10-46 Selecting the consistency group for where to move the FlashCopy mapping

4. Click Move to Consistency Group to confirm your changes.
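
The CLI equivalent is chfcmap with the -consistgrp parameter. A minimal sketch with
placeholder names follows; as noted above, the mapping must not be in a copying, stopping,
or suspended state:

# Move mapping fcmap0 into consistency group ITSO_CG_01
chfcmap -consistgrp ITSO_CG_01 fcmap0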



10.2.11 Removing FlashCopy mappings from consistency groups
To remove one or multiple FlashCopy mappings from a consistency group, complete the
following steps:
1. Open the FlashCopy Consistency Groups window or FlashCopy Mappings window.
2. Right-click the FlashCopy mappings that you want to remove and select Remove from
Consistency Group, as shown in Figure 10-47.

Note: Only FlashCopy mappings that belong to a consistency group can be removed.

Figure 10-47 Removing FlashCopy mappings from a consistency group

3. In the Remove FlashCopy Mapping from Consistency Group window, click Remove, as
shown in Figure 10-48 on page 565.

Figure 10-48 Confirming the selection of mappings to be removed

10.2.12 Modifying a FlashCopy mapping


To modify a FlashCopy mapping, complete the following steps:
1. Open the FlashCopy Consistency Groups window or FlashCopy Mappings window.
Right-click the FlashCopy mapping that you want to edit and select Edit Properties, as
shown in Figure 10-49.

Figure 10-49 Editing FlashCopy mapping properties



Note: It is not possible to select multiple FlashCopy mappings to edit all their properties
concurrently.

2. In the Edit FlashCopy Mapping window, you can modify the background copy rate and the
cleaning rate for a selected FlashCopy mapping, as shown in Figure 10-50.

Figure 10-50 Editing copy and cleaning rates of a FlashCopy mapping

For more information about the “Background Copy Rate” and the “Cleaning Rate”, see
Table 10-1 on page 518 or Table 10-5 on page 527.
3. Click Save to confirm your changes.
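
The same change can be made with chfcmap. The following sketch uses example values:

# Set the background copy rate to 80 and the cleaning rate to 40 for mapping fcmap0
chfcmap -copyrate 80 -cleanrate 40 fcmap0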

10.2.13 Renaming FlashCopy mappings


To rename one or multiple FlashCopy mappings, complete the following steps:
1. Open the FlashCopy Consistency Groups window or FlashCopy Mappings window.
2. Right-click the FlashCopy mappings that you want to rename and select Rename
Mapping, as shown in Figure 10-51 on page 567.

Figure 10-51 Renaming FlashCopy mappings

3. In the Rename FlashCopy Mapping window, enter the new name that you want to assign
to each FlashCopy mapping and click Rename, as shown in Figure 10-52.

FlashCopy mapping name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The FlashCopy mapping name can be 1 - 63
characters.

Figure 10-52 Renaming the selected FlashCopy mappings



Renaming a consistency group
To rename a consistency group, complete the following steps:
1. Open the Consistency Groups window.
2. Right-click the consistency group that you want to rename and select Rename, as shown
in Figure 10-53.

Figure 10-53 Renaming a consistency group

3. Enter the new name that you want to assign to the consistency group and click Rename,
as shown in Figure 10-54.

Note: It is not possible to select multiple consistency groups to edit their names
concurrently.

Figure 10-54 Renaming the selected consistency group

Consistency group name: The name can consist of the letters A - Z and a - z, the
numbers 0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, a dash, or an underscore.

10.2.14 Deleting FlashCopy mappings


To delete one or multiple FlashCopy mappings, complete the following steps:
1. Open the FlashCopy consistency groups window or FlashCopy Mappings window.
Right-click the FlashCopy mappings that you want to delete and select Delete Mapping,
as shown in Figure 10-55.

Figure 10-55 Deleting FlashCopy mappings



2. The Delete FlashCopy Mapping window opens, as shown in Figure 10-56. In the Verify
the number of FlashCopy mappings that you are deleting field, enter the number of
mappings that you want to delete. This verification helps you avoid deleting the wrong
mappings.

Figure 10-56 Confirming the selection of FlashCopy mappings to be deleted

3. If you still have target volumes that are inconsistent with the source volumes and you want
to delete these FlashCopy mappings, select the Delete the FlashCopy mapping even
when the data on the target volume is inconsistent, or if the target volume has other
dependencies option. Click Delete.

570 Implementing IBM FlashSystem 9200, 9100, 7200, and 5100 Systems with IBM Spectrum Virtualize V8.3.1
10.2.15 Deleting a FlashCopy consistency group

Important: Deleting a consistency group does not delete the FlashCopy mappings that it
contains.

To delete a consistency group, complete the following steps:


1. Open the Consistency Groups window.
2. Right-click the consistency group that you want to delete and select Delete, as shown in
Figure 10-57.

Figure 10-57 Deleting a consistency group

3. A warning message is displayed, as shown in Figure 10-58. Click Yes.

Figure 10-58 Confirming the consistency group deletion



10.2.16 Starting FlashCopy mappings

Important: Only FlashCopy mappings that do not belong to a consistency group can be
started individually. If FlashCopy mappings are part of a consistency group, they can be
started together only by using the consistency group start command.

The start command defines the PiT, which is the moment that is used as a reference (T0) for
all subsequent operations on the source and the target volumes.

To start one or multiple FlashCopy mappings that do not belong to a consistency group,
complete the following steps:
1. Open the FlashCopy consistency groups window or FlashCopy Mappings window.
2. Right-click the FlashCopy mappings that you want to start and select Start, as shown in
Figure 10-59.

Figure 10-59 Starting FlashCopy mappings

You can check the FlashCopy state and the progress of the mappings in the Status and
Progress columns of the table, as shown in Figure 10-60.

Figure 10-60 FlashCopy mappings status and progress examples

FlashCopy Snapshots depend on the source volume and remain in a “copying” state while
the mapping is started.

FlashCopy clones and the first occurrence of a FlashCopy backup can take a long time to
complete, depending on the size of the source volume and on the copyrate value. Subsequent
occurrences of FlashCopy backups are faster because only the changes that were made
between two occurrences are copied.

For more information about FlashCopy starting operations and states, see 10.1.10, “Starting
FlashCopy mappings and consistency groups” on page 520.
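
On the CLI, a stand-alone mapping is started with startfcmap; the -prep flag prepares the
mapping (flushing cache) before triggering it. A short sketch with a placeholder mapping
name:

# Prepare and start the mapping, then check its state and copy progress
startfcmap -prep fcmap0
lsfcmap fcmap0
lsfcmapprogress fcmap0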

10.2.17 Stopping FlashCopy mappings

Important: Only FlashCopy mappings that do not belong to a consistency group can be
stopped individually. If FlashCopy mappings are part of a consistency group, they can be
stopped together only by using the consistency group stop command.

The only reason to stop a FlashCopy mapping is for an incremental FlashCopy. When the first
occurrence of an incremental FlashCopy is started, a full copy of the source volume is made.
When 100% of the source volume is copied, the FlashCopy mapping does not stop
automatically, and a manual stop can be performed. The target volume is available for read
and write operations during the copy and after the mapping is stopped.

In any other case, stopping a FlashCopy mapping interrupts the copy and resets the bitmap
table. Because only part of the data from the source volume was copied, the copied grains
might be meaningless without the remaining grains. Therefore, the target volumes are placed
offline and are unusable, as shown in Figure 10-61.

Figure 10-61 Showing target volumes state and FlashCopy mappings status



To stop one or multiple FlashCopy mappings that do not belong to a consistency group,
complete the following steps:
1. Open the FlashCopy consistency groups window or FlashCopy Mappings window.
2. Right-click the FlashCopy mappings that you want to stop and select Stop, as shown in
Figure 10-62.

Figure 10-62 Stopping FlashCopy mappings

Note: FlashCopy mappings can be in a stopping state for a long time if you created
dependencies between several targets. The mapping is then in cleaning mode. For more
information about dependencies and the stopping process, see “Stopping a process in
multiple target FlashCopy: Cleaning mode” on page 526.
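
The CLI equivalent for a stand-alone mapping is shown below; the mapping name is a
placeholder:

# Stop the mapping; it can remain in the stopping state while cleaning runs
stopfcmap fcmap0
# Confirm the resulting state of the mapping and its target volume
lsfcmap fcmap0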

10.2.18 Memory allocation for FlashCopy


Copy Services features require that small amounts of volume cache be converted from cache
memory into bitmap memory to allow the functions to operate at an I/O group level. If you do
not have enough bitmap space that is allocated when you try to use one of the functions, you
cannot complete the configuration. The total memory that can be dedicated to these functions
is not defined by the physical memory in the system. The memory is constrained by the
software functions that use the memory.

For every FlashCopy mapping that is created on an IBM Spectrum Virtualize system, a
bitmap table is created to track the copied grains. By default, the system allocates 20 MiB of
memory for a minimum of 10 TiB of FlashCopy source volume capacity and 5 TiB of
incremental FlashCopy source volume capacity.

Depending on the grain size of the FlashCopy mapping, the memory capacity usage is
different. 1 MiB of memory provides the following volume capacity for the specified I/O group:
• For FlashCopy clones and snapshots with a 256 KiB grain size, 2 TiB of total FlashCopy
source volume capacity is provided.
• For FlashCopy clones and snapshots with a 64 KiB grain size, 512 GiB of total FlashCopy
source volume capacity is provided.

• For incremental FlashCopy with a 256 KiB grain size, 1 TiB of total incremental FlashCopy
source volume capacity is provided.
• For incremental FlashCopy with a 64 KiB grain size, 256 GiB of total incremental
FlashCopy source volume capacity is provided.

To calculate the memory requirements and confirm that your system can accommodate the
total installation size, see Table 10-9.

Table 10-9 Memory allocation for FlashCopy services

Minimum allocated bitmap space: 0
Default allocated bitmap space: 20 MiB
Maximum allocated bitmap space: 2 GiB
Minimum functionality when using the default values (a): 10 TiB of FlashCopy source
volume capacity and 5 TiB of incremental FlashCopy source volume capacity

a. The actual amount of functionality might increase based on settings, such as grain size and
strip size.

The FlashCopy bitmap allocation covers the FlashCopy function, Global Mirror with Change
Volumes (GMCV), and active-active (IBM HyperSwap) relationships.

For multiple FlashCopy targets, you must consider the number of mappings. For example, for
a mapping with a grain size of 256 KiB, 8 KiB of memory enables one mapping between a
16 GiB source volume and a 16 GiB target volume. Alternatively, for a mapping with a 256 KiB
grain size, 8 KiB of memory enables two mappings between one 8 GiB source volume and
two 8 GiB target volumes.

When creating a FlashCopy mapping, if you specify an I/O group other than the I/O group of
the source volume, the memory accounting goes toward the specified I/O group, not toward
the I/O group of the source volume.

When creating FlashCopy relationships or mirrored volumes, more bitmap space is allocated
automatically by the system if required.

For FlashCopy mappings, only one I/O group consumes bitmap space. By default, the I/O
group of the source volume is used.

When you create a reverse mapping, such as when you run a restore operation from a
snapshot to its source volume, a bitmap is created.

When you configure GMCV, two internal FlashCopy mappings are created for each change
volume.

You can modify the resource allocation for each I/O group of your system. For more
information, see IBM Knowledge Center and expand Command-line interface → Clustered
system commands → chiogrp.
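
For example, the bitmap memory for the FlashCopy feature of an I/O group can be inspected
and changed as shown in the following sketch. The values are examples only; verify the
syntax for your code level before changing production settings:

# Display the current memory allocation of I/O group 0
lsiogrp io_grp0
# Increase the FlashCopy bitmap memory to 40 MiB for I/O group 0
chiogrp -feature flash -size 40 io_grp0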



10.3 Transparent Cloud Tiering
TCT is a function of IBM Spectrum Virtualize that uses FlashCopy mechanisms to produce a
PiT snapshot of the data. TCT helps to increase the flexibility to protect and transport data to
public or private cloud infrastructures. This technology is built on top of IBM Spectrum
Virtualize Software capabilities. TCT uses the cloud to store snapshot targets and provides
alternatives to restore snapshots from the private and public cloud of an entire volume or set
of volumes.

TCT can help to solve business needs that require duplication of data of your source volume.
Volumes can remain online and active while you create snapshot copies of the data sets. TCT
operates below the host OS and its cache. Therefore, the copy is not apparent to the host.

IBM Spectrum Virtualize has built-in software algorithms that enable the TCT function to
interact securely with, for example, Information Dispersal Algorithms (IDAs), which are
essentially the interface to IBM Cloud Object Storage.

Object storage refers to the entity in which IBM Cloud Object Storage organizes, manages,
and stores units of data. To transform these snapshots of traditional data into object storage,
the storage nodes and the IDA import the data and transform it into metadata and a number
of slices. The object can be read by using a subset of those slices.

When an object storage entity is stored as cloud object storage, the objects must be
manipulated or managed as a whole unit. Therefore, objects cannot be accessed or updated
partially.

IBM Spectrum Virtualize uses internal software components to support an HTTP-based


REST application programming interface (API) to interact with an external cloud service
provider (CSP) or private cloud.

For more information about the IBM Cloud Object Storage portfolio, see IBM Cloud Object
Storage.

Demonstration: The IBM Client Demonstration Center has a demonstration that is
available at this web page.

10.3.1 Considerations for using Transparent Cloud Tiering


TCT can help to address certain business needs. When considering whether to use TCT, you
should adopt a combination of business and technical views of the challenges and determine
whether TCT can solve both of those needs.

Using TCT can help businesses to manipulate data as shown in the following examples:
• Creating a consistent snapshot of dynamically changing data.
• Creating a consistent snapshot of production data to facilitate data movement or migration
between systems running at different locations.
• Creating a snapshot of production data sets for application development and testing.
• Creating a snapshot of production data sets for quality assurance.
• Using secure data tiering for off-premises cloud providers.

From a technical standpoint, ensure that you evaluate the network capacity and bandwidth
requirements to support your data migration to off-premises infrastructure. To maximize
productivity, ensure that your network capacity matches the amount of data that must be
transmitted to and from the cloud.

From a security standpoint, ensure that your on-premises or off-premises cloud infrastructure
supports your requirements in terms of methods and level of encryption.

Regardless of your business needs, TCT within the IBM Spectrum Virtualize can provide
opportunities to manage the exponential data growth and manipulate data at low cost.

Today, many CSPs offer several storage-as-services solutions, such as content repository,
backup, and archive. Combining all of these services, IBM Spectrum Virtualize can help you
solve many challenges that are related to rapid data growth, scalability, and manageability at
attractive costs.

10.3.2 Transparent Cloud Tiering as a backup solution and for data migration
TCT can also be used as backup and data migration solution. In certain conditions, it can be
easily applied to eliminate the downtime that is associated with the need to import and export
data.

When TCT is applied as your backup strategy, IBM Spectrum Virtualize uses the same
FlashCopy functions to produce PiT snapshot of an entire volume or set of volumes.

To ensure the integrity of the snapshot, it might be necessary to flush the host OS and
application cache of any outstanding reads or writes before the snapshot is performed. Failing
to flush the host OS and application cache can produce inconsistent and useless data.

Many OSs and applications provide mechanisms to stop I/O operations and ensure that all
data is flushed from host cache. If these mechanisms are available, they can be used with
snapshot operations. When these mechanisms are not available, it might be necessary to
flush the cache manually by quiescing the application and unmounting the file system or
logical drives.

When choosing IBM Cloud Object Storage as a backup solution, be aware that the object
storage must be managed as a whole. Backup and restore of individual files, folders, and
partitions are not possible.

To interact with CSPs or a private cloud, IBM Spectrum Virtualize requires interaction with the
correct architecture and specific properties. Conversely, CSPs offer attractive prices for
object storage in the cloud and deliver an easy-to-use interface. Normally, cloud providers
offer low-cost object storage space, and charges are applied only for the outbound cloud
traffic.

10.3.3 Restoring data by using Transparent Cloud Tiering


TCT can also be used to restore data from any snapshot that is stored in cloud providers.
When the cloud accounts’ technical and security requirements are met, the storage objects in
the cloud can be used as a data recovery solution. The recovery method is similar to a
backup, except that the reverse direction is applied. TCT running on IBM Spectrum Virtualize
queries for object storage that is stored in a cloud infrastructure. It enables users to restore
the objects into a new volume or set of volumes.



This approach can be used for various applications, such as recovering your production
database application after an errant batch process causes extensive damage.

Note: You should always consider the bandwidth characteristics and network capabilities
when choosing to use TCT.

The restoration of individual files by using TCT is not possible. As mentioned, object storage
is unlike a file or a block: it must be managed as a whole unit and cannot be accessed or
updated partially. Cloud object storage is accessible by using an HTTP-based REST API.

10.3.4 Transparent Cloud Tiering restrictions


This section describes the following restrictions that must be considered before using TCT:
• Because the object storage is normally accessed by using the HTTP protocol on top of a
TCP/IP stack, all traffic that is associated with the cloud service flows through the node
management ports.
• The size of cloud-enabled volumes cannot change. If the size of the volume changes, a
snapshot must be created so that a new object storage is constructed.
• TCT cannot be applied to volumes that are part of traditional copy services, such as
FlashCopy, MM, GM, and HyperSwap.
• A volume containing two physical copies in two different storage pools cannot be part of
TCT.
• Cloud tiering snapshots cannot be taken from a volume that is part of a migration activity
across storage pools.
• Because VMware vSphere Virtual Volumes (VVOLs) are managed by a specific VMware
application, these volumes are not candidates for TCT.

10.4 Implementing Transparent Cloud Tiering


This section describes the steps and requirements to implement TCT by using IBM Spectrum
Virtualize.

10.4.1 Domain name server configuration


Because most IBM Cloud Object Storage is managed and accessible by using the HTTP
protocol, the domain name server (DNS) setting is an important requirement to ensure
consistent resolution of domain names to internet resources.

Using your IBM Spectrum Virtualize management GUI, select Settings → System → DNS
and insert your IPv4 or IPv6 address. The DNS name can be anything that you want, and is
used as a reference. Click Save after you complete the choices, as shown in Figure 10-63 on
page 579.

Figure 10-63 DNS settings
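
The DNS server can also be defined on the CLI. A minimal sketch with placeholder values:

# Define a DNS server and list the configured servers
mkdnsserver -name dns01 -ip 10.10.10.10
lsdnsserver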

10.4.2 Enabling Transparent Cloud Tiering


After you complete the DNS settings, you can enable the TCT function in IBM Spectrum
Virtualize system by completing the following steps:
1. Using the IBM Spectrum Virtualize GUI, select Settings → System → Transparent
Cloud Tiering and click Enable Cloud Connection, as shown in Figure 10-64. The TCT
wizard starts and shows the welcome warning.

Figure 10-64 Enabling Cloud Tiering

Note: It is important to implement encryption before enabling the cloud connection.
Encryption protects your data from attacks during the transfer to the external cloud
service. Because the HTTP protocol is used to connect to the cloud infrastructure,
transactions are likely to traverse the internet. For the purposes of this publication,
encryption is not enabled on the system.

2. Click Next to continue. You must select one of three CSPs:
– IBM Cloud
– OpenStack Swift
– Amazon S3



Figure 10-65 shows the available options.

Figure 10-65 Selecting a cloud service provider

3. In the next window, you must complete the settings of the cloud provider, credentials, and
security access keys. The required settings can change depending on your CSP. An
example of an empty form for an IBM Cloud connection is shown in Figure 10-66 on
page 581.

Figure 10-66 Entering the cloud service provider information

4. Review your settings and click Finish, as shown in Figure 10-67.

Figure 10-67 Cloud connection summary



5. The cloud credentials can be viewed and updated at any time by using the function icons
on the left side of the GUI or by selecting Settings → System → Transparent Cloud
Tiering. From this window, you can also verify the status, the data usage statistics, and
the upload and download bandwidth limits that are set to support this function.
In the account information window, you can visualize your cloud account information. This
window also enables you to remove the account.
An example of visualizing your cloud account information is shown in Figure 10-68.

Figure 10-68 Enabled Transparent Cloud Tiering window
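
A cloud account can also be created on the CLI. The following sketch shows an Amazon S3
example with placeholder credentials; the exact parameters differ by provider (for example,
mkcloudaccountswift for OpenStack Swift) and can vary by code level, so verify them in the
command reference:

# Create an Amazon S3 cloud account (all values are examples)
mkcloudaccounts3 -name itso_cloud -bucketprefix itso -accesskeyid AKIAEXAMPLE -secretaccesskey wJalrEXAMPLEKEY
# Display the account and its connection status
lscloudaccount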

10.4.3 Creating cloud snapshots


To manage the cloud snapshots, IBM Spectrum Virtualize provides a section in the GUI that is
named Cloud Volumes. This section shows you how to add the volumes that are going to be
part of the TCT. As described in 10.3.4, “Transparent Cloud Tiering restrictions” on page 578,
the cloud snapshot function is available only for volumes that are not subject to the
restrictions listed there.

Any volume can be added to the cloud volumes list. However, snapshots work only for
volumes that are not part of any other copy service.

To create cloud snapshots, complete the following steps:
1. Click Volumes → Cloud Volumes, as shown in Figure 10-69.

Figure 10-69 Cloud Volumes menu

After you select Cloud Volumes, a new window opens, and you can use the GUI to select
one or more volumes on which you need to enable a cloud snapshot, or you can add
volumes to the list, as shown in Figure 10-70.

Figure 10-70 Cloud Volumes window



2. Click Add Volumes to enable cloud-snapshot on the volumes. A new window opens, as
shown in Figure 10-71. Select the volumes that you want to enable cloud tiering for and
click Next.

Figure 10-71 Adding volumes to cloud tiering

3. The IBM Spectrum Virtualize GUI provides two options for you to select:
– If the first option is selected, the system decides what type of snapshot is created
based on previous objects for each selected volume. If a full copy (full snapshot) of a
volume was already created, the system makes an incremental copy of the volume.
– The second option creates a full snapshot of one or more selected volumes. You can
select the second option for a first occurrence of a snapshot and click Finish, as shown
in Figure 10-72. You can also select the second option even if another full copy of the
volume exists.

Figure 10-72 Selecting whether a full copy is made or the system decides

The Cloud Volumes window shows complete information about the volumes and their
snapshots. The GUI shows the following information:
• Name of the volume
• ID of the volume that is assigned by IBM Spectrum Virtualize
• The snapshot size
• The date and time that the last snapshot was created
• The number of snapshots that are taken for every volume
• The snapshot status
• The restore status
• The volume group for a set of volumes
• The volume unique identifier (UID)

Figure 10-73 shows an example of a Cloud Volumes list.

Figure 10-73 Cloud Volumes list example

The Actions menu in the Cloud Volumes window enables you to create and manage
snapshots. Also, you can use the menu to cancel, disable, and restore snapshots to volumes,
as shown in Figure 10-74.

Figure 10-74 Available actions in the Cloud Volumes window
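
Cloud snapshots can also be taken from the CLI with the backupvolume command. A hedged
sketch with a placeholder volume name:

# Take a full cloud snapshot of volume DBVOL
backupvolume -full DBVOL
# List the backup generations that exist for the volume
lsvolumebackupgeneration -volume DBVOL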



10.4.4 Managing cloud snapshots
To manage volume cloud snapshots, open the Cloud Volumes window, right-click the volume
that you want to manage the snapshots from, and select Manage Cloud Snapshot.

“Managing” a snapshot means deleting one or multiple versions of it. The list of PiT copies
appears and provides more information about their status, type, and snapshot date, as
shown in Figure 10-75.

Figure 10-75 Deleting versions of a volume’s snapshots

From this window, an administrator can delete old snapshots (old PiT copies) if they are no
longer needed. The most recent copy cannot be deleted. If you want to delete the most recent
copy, you must first disable cloud tiering for the specified volume.

10.4.5 Restoring cloud snapshots


This option enables IBM Spectrum Virtualize to restore snapshots from the cloud to the
selected volumes or to create volumes with the restored data.

If the cloud account is shared among systems, IBM Spectrum Virtualize queries the
snapshots that are stored in the cloud, and enables you to restore to a new volume. To restore
a volume’s snapshot, complete the following steps:
1. Open the Cloud Volumes window.
2. Right-click a volume and select Restore, as shown in Figure 10-76 on page 587.

Figure 10-76 Selecting a volume to restore a snapshot from

3. A list of available snapshots is displayed. The snapshot dates (PiT), types (full or
incremental), states, and sizes are shown, as shown in Figure 10-77. Select the
version that you want to restore and click Next.

Figure 10-77 Selecting a snapshot version to restore

If the snapshot version that you selected has later generations (more recent snapshot
dates), the newer copies are removed from the cloud.



4. The IBM Spectrum Virtualize GUI provides two options to restore the snapshot from cloud.
You can restore the snapshot from cloud directly to the selected volume, or create a
volume to restore the data on, as shown in Figure 10-78. Make a selection and click Next.

Figure 10-78 Restoring a snapshot on an existing volume or on a new volume

Note: Restoring a snapshot on the volume overwrites the data on the volume. The
volume is taken offline (no read or write access), and the data from the PiT copy of the
volume is written. The volume returns online when all data is restored from the cloud.

5. If you selected the Restore to a new Volume option, you must enter the following
information for the new volume to be created with the snapshot data, as shown in
Figure 10-79 on page 589:
– Name
– Storage Pool
– Capacity Savings (None, Compressed, or Thin-provisioned)
– I/O group
You are not prompted to enter the volume size because the new volume’s size is identical
to the snapshot copy volume.
Enter the settings for the new volume and click Next.

Figure 10-79 Restoring a snapshot to a new volume

6. A Summary window opens where you can double-check the restoration settings, as shown
in Figure 10-80. Click Finish. The system creates a volume or overwrites the selected
volume. The more recent snapshots (later versions) of the volume are deleted from the
cloud.

Figure 10-80 Restoring a snapshot summary

If you chose to restore the data from the cloud to a new volume, the new volume appears
immediately in the Volumes window. However, it is taken offline until all the data from the
snapshot is written. The new volume is independent. It is not defined as a target in a
FlashCopy mapping with the selected volume for example. It also is not mapped to a host.
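
The CLI counterpart is the restorevolume command. The following sketch assumes that
generation 1 exists for the volume; the name and generation number are examples:

# Restore snapshot generation 1 onto the source volume (this overwrites its data)
restorevolume -generation 1 DBVOL
# Monitor the restore operation
lsvolumerestoreprogress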



10.5 Volume mirroring and migration options
Volume mirroring is a simple redundant array of independent disks (RAID) 1 type function
that enables a volume to remain online even when the storage pool that is backing it becomes
inaccessible. Volume mirroring is designed to protect the volume from storage infrastructure
failures by seamless mirroring between storage pools.

Volume mirroring is provided by a specific volume mirroring function in the I/O stack. It cannot
be manipulated like a FlashCopy or other types of copy volumes. However, this feature
provides migration functions, which can be obtained by splitting the mirrored copy from the
source or by using the migrate to function. Volume mirroring cannot control back-end storage
mirroring or replication.

With volume mirroring, host I/O completes when both copies are written. This feature is
enhanced with a tunable latency tolerance. This tolerance provides an option to give
preference to losing the redundancy between the two copies. This tunable timeout value is
Latency or Redundancy.

The Latency tuning option, which is set with chvdisk -mirrorwritepriority latency, is the
default. It prioritizes host I/O latency, which yields a preference to host I/O over availability.
However, you might need to give preference to redundancy in your environment when
availability is more important than I/O response time. Run the chvdisk -mirrorwritepriority
redundancy command to set the Redundancy option.

Regardless of which option you choose, volume mirroring can provide extra protection for
your environment.

Migration offers the following options:


• Export to Image mode.
By using this option, you can move storage from managed mode to image mode, which is
useful if you are using an IBM Spectrum Virtualized based system as a migration device.
For example, vendor A’s product cannot communicate with vendor B’s product, but you
must migrate data from vendor A to vendor B. By using Export to Image mode, you can
migrate data by using Copy Services functions and then return control to the native array
while maintaining access to the hosts.
• Import to Image mode.
By using this option, you can import a storage MDisk or logical unit number (LUN) with its
data from an external storage system without putting metadata on it so that the data
remains intact. After you import it, all Copy Services functions can be used to migrate the
storage to other locations while the data remains accessible to your hosts.
• Volume migration by using volume mirroring and then by using Split into New Volume.
By using this option, you can use the available RAID 1 functions. You create two copies of
data that initially has a set relationship (one volume with two copies, one primary and one
secondary) but then break the relationship (two volumes, both primary and no relationship
between them) to make them independent copies of data.
You can use this option to migrate data between storage pools and devices. You might use
this option if you want to move volumes to multiple storage pools. Each volume can have
two copies at a time, which means that you can add only one copy to the original volume,
and then you must split those copies to create another copy of the volume.

• Volume migration by using move to another pool.
By using this option, you can move any volume between storage pools without any
interruption to the host access. This option is a quicker version of the Volume Mirroring
and Split into New Volume option. You might use this option if you want to move volumes
in a single step or if you do not have a volume mirror copy.

Migration: Although these migration methods do not disrupt access, you must take a brief
outage to install the host drivers for your IBM Spectrum Virtualize system if you do not
already have them installed.

With volume mirroring, you can move data to different MDisks within the same storage pool or
move data between different storage pools. An advantage of volume mirroring over volume
migration is that with volume mirroring, the storage pools do not need to have the same extent
size, which is a requirement for volume migration.

Note: Volume mirroring does not create a second volume before you split copies. Volume
mirroring adds a second copy of the data under the same volume. Therefore, you end up
with one volume that is presented to the host with two copies of data that are connected to
this volume. Only splitting copies creates another volume, and then both volumes have
only one copy of the data.

Starting with Version 7.3 and the introduction of the new cache architecture, mirrored volume
performance was improved. Now, the lower cache is beneath the volume mirroring layer,
which means that both copies have their own cache.

This approach helps when you have copies of different types, for example, generic and
compressed, because now both copies use their own independent cache and perform their own
read prefetch. Destaging of the cache can now be done independently for each copy, so one
copy does not affect the performance of the second copy.

Also, because the IBM Spectrum Virtualize destage algorithm is MDisk aware, it can tune or
adapt the destaging process, depending on MDisk type and usage, for each copy
independently.

10.6 Remote Copy services


This section describes the Remote Copy (RC) services: synchronous RC, which is called Metro
Mirror (MM); asynchronous RC, which is called Global Mirror (GM); and Global Mirror with
Change Volumes (GMCV). RC in an IBM Spectrum Virtualize system is similar to RC in the
IBM System Storage DS8000 family at a functional level, but the implementation differs.

IBM Spectrum Virtualize provides a single point of control when RC is enabled in your
network (regardless of the disk subsystems that are used) if those disk subsystems are
supported by the system.

The general application of RC services is to maintain two real-time synchronized copies of a
volume. Often, the two copies are geographically dispersed between two IBM Spectrum
Virtualize systems. However, it is possible to use MM or GM within a single system (within an
I/O group). If the master copy fails, you can enable an auxiliary copy for I/O operations.



Tips: Intracluster MM/GM uses more resources within the system compared to an
intercluster MM/GM relationship, where resource allocation is shared between the
systems. Use intercluster MM/GM when possible. For mirroring volumes in the same
system, it is better to use volume mirroring or the FlashCopy feature.

A typical application of this function is to set up a dual-site solution that uses two
IBM Spectrum Virtualize based systems of the same type. The first site is considered the
primary site or production site, and the second site is considered the backup site or failover
site. The failover site is activated when a failure at the first site is detected.

When MM or GM is used, a certain amount of bandwidth is required for the system
intercluster heartbeat traffic. The amount of traffic depends on how many nodes are in each
of the two clustered systems.

Table 10-10 lists the amount of heartbeat traffic (in megabits per second (Mbps)) that is
generated by various sizes of clustered systems.

Table 10-10 Intersystem heartbeat traffic in Mbps


IBM Spectrum Virtualize     IBM Spectrum Virtualize system 2
system 1                    2 nodes     4 nodes     6 nodes     8 nodes

2 nodes                     5           6           6           6
4 nodes                     6           10          11          12
6 nodes                     6           11          16          17
8 nodes                     6           12          17          21

10.6.1 IBM FlashSystem 9100 system layers


An IBM FlashSystem 9100 family system can be in one of two layers: the replication layer or
the storage layer. The system layer affects how the system interacts with IBM SAN Volume
Controller and other external Storwize and IBM FlashSystem 9100 family systems.

In the storage layer, an IBM FlashSystem 9100 family system has the following characteristics
and requirements:
򐂰 The system can perform MM and GM replication with other storage-layer systems.
򐂰 The system can provide external storage for replication-layer systems.
򐂰 The system cannot use a storage-layer system as external storage.

In the replication layer, a system has the following characteristics and requirements:
򐂰 The system can perform MM and GM replication with other replication-layer systems.
򐂰 The system cannot provide external storage for a replication-layer system.
򐂰 The system can use a storage-layer system as external storage.

An IBM FlashSystem 9100 family system is in the storage layer by default, but the layer can be
changed. For example, you might want to change a system to the replication layer if you want to
virtualize other systems or replicate to a SAN Volume Controller system.

Note: Before you change the system layer, the following conditions must be met on the
system at the time of layer change:
򐂰 No other Storwize or IBM FlashSystem 9100 system can exist as a back end or host
entity.
򐂰 No system partnerships can exist.
򐂰 No Storwize or IBM FlashSystem 9100 system can be visible on the SAN fabric.

The layer can be changed during normal host I/O.

In your system, run the lssystem command to check the current system layer, as shown in
Example 10-2.

Example 10-2 Output from the lssystem command showing the system layer
IBM_Storwize:ITSO-V7k:superuser>lssystem
id 0000010029C056C2
name ITSO-V7k
location local
...
layer storage
rc_buffer_size 48
...
IBM_Storwize:ITSO-V7k:superuser>
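Provided that the conditions described in the preceding note are met, the layer can be changed
by running the chsystem command and verified afterward with lssystem, as in this minimal
sketch:

   chsystem -layer replication
   lssystem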

Note: Consider the following rules for creating remote partnerships between
IBM Spectrum Virtualize based systems:
򐂰 A SAN Volume Controller is always in the replication layer.
򐂰 By default, IBM Storwize and IBM FlashSystem 9100 systems are in the storage layer,
but they can be changed to the replication layer.
򐂰 A system can form partnerships only with systems in the same layer.

10.6.2 Multiple IBM Spectrum Virtualize systems replication


Each IBM Spectrum Virtualize system can maintain up to three partner system relationships,
which enables as many as four systems to be directly associated with each other. This
system partnership capability enables the implementation of disaster recovery (DR) solutions.

Note: For more information about the restrictions and limitations of native Internet Protocol
(IP) replication, see 10.8.2, “IP partnership limitations” on page 629.



Figure 10-81 shows an example of a multiple-system mirroring configuration.

Figure 10-81 Multiple-system mirroring configuration example

Supported multiple-system mirroring topologies


Multiple-system mirroring supports various partnership topologies, as shown in Figure 10-82.
This example is a star topology (A → B, A → C, and A → D).

Figure 10-82 Star topology

Figure 10-82 shows four systems in a star topology, with System A at the center. System A
can be a central DR site for the three other locations.

By using a star topology, you can migrate applications by using a process such as the one
described in the following example:
1. Suspend the application at A.
2. Remove the A → B relationship.
3. Create the A → C relationship (or the B → C relationship).
4. Synchronize to system C, and ensure that A → C is established.

Because each system supports up to three partnerships, examples of valid partnership
combinations among four systems include the following ones:
– A → B, A → C, A → D, B → C, B → D, and C → D
– A → B, A → C, and B → C

Figure 10-83 shows an example of a triangle topology (A → B, A → C, and B → C).

Figure 10-83 Triangle topology

Figure 10-84 shows an example of a fully connected topology (A → B, A → C, A → D, B → D,
and C → D).

Figure 10-84 Fully connected topology

Figure 10-84 is a fully connected mesh in which every system has a partnership to each of
the three other systems. This topology enables volumes to be replicated between any pair of
systems, for example A → B, A → C, and B → C.

Figure 10-85 shows a daisy-chain topology.

Figure 10-85 Daisy-chain topology

Although systems can have up to three partnerships, volumes can be part of only one RC
relationship, for example A → B.

System partnership intermix: All of the preceding topologies are valid for the intermix of
Storwize and IBM FlashSystem 9100 systems if they are configured to the same layer.



10.6.3 Importance of write ordering
Many applications that use block storage have a requirement to survive failures, such as loss
of power or a software crash, and to not lose data that existed before the failure. Because
many applications must perform many update operations in parallel, maintaining write
ordering is key to ensuring the correct operation of applications after a disruption.

An application that performs many database updates is designed with the concept of
dependent writes. With dependent writes, you must ensure that an earlier write completed
before a later write is started. Reversing the order of writes, or performing them in a different
order than the application intended, can undermine the application’s algorithms and lead to
problems, such as detected or undetected data corruption.

The IBM Spectrum Virtualize MM and GM implementation operates in a manner that is
designed to always keep a consistent image at the secondary site. The GM implementation
uses complex algorithms that identify sets of data and number those sets of data in
sequence. The data is then applied at the secondary site in the defined sequence.

Operating in this manner ensures that if the relationship is in a Consistent_Synchronized
state, the GM target data is at least crash-consistent, and supports quick recovery through
application crash recovery facilities.

For more information about dependent writes, see 10.1.13, “FlashCopy and image mode
volumes” on page 529.

Remote Copy consistency groups


A Remote Copy consistency group can contain an arbitrary number of relationships up to the
maximum number of MM/GM relationships that is supported by the IBM Spectrum Virtualize
system. MM/GM commands can be issued to a Remote Copy consistency group.

Therefore, these commands can be issued simultaneously for all MM/GM relationships that
are defined within that consistency group, or to a single MM/GM relationship that is not part of
a Remote Copy consistency group. For example, when a startrcconsistgrp command is
issued to the consistency group, all of the MM/GM relationships in the consistency group are
started concurrently.
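The following minimal sketch (the remote system name ITSO-SV2 and the object names
CG_DB and REL_DB1 are hypothetical) creates a consistency group on an existing
partnership, adds an existing relationship to it, and then starts all of its member relationships
together:

   mkrcconsistgrp -cluster ITSO-SV2 -name CG_DB
   chrcrelationship -consistgrp CG_DB REL_DB1
   startrcconsistgrp CG_DB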

Figure 10-86 shows the concept of Remote Copy consistency groups.

Figure 10-86 Remote Copy consistency groups

Certain uses of MM/GM require the manipulation of more than one relationship. Remote
Copy consistency groups can group relationships so that they are manipulated in unison.

Consider the following points:
򐂰 MM/GM relationships can be part of a consistency group, or they can be stand-alone and
handled as single instances.
򐂰 A consistency group can contain zero or more relationships. An empty consistency group
with zero relationships in it has little purpose until it is assigned its first relationship, except
that it has a name.
򐂰 All relationships in a consistency group must have corresponding master and auxiliary
volumes.
򐂰 All relationships in one consistency group must be the same type, for example, only MM or
only GM.

Although consistency groups can be used to manipulate sets of relationships that do not need
to satisfy these strict rules, this manipulation can lead to unwanted side effects. The rules
behind a consistency group mean that certain configuration commands are prohibited. These
configuration commands are not prohibited if the relationship is not part of a consistency
group.

For example, consider the case of two applications that are independent yet placed into a
single consistency group. If an error occurs, synchronization is lost and a background copy
process is required to recover synchronization. While this process is progressing, MM/GM
rejects attempts to enable access to the auxiliary volumes of either application.

If one application finishes its background copy more quickly than the other application,
MM/GM still refuses to grant access to its auxiliary volumes even though it is safe in this case.
The MM/GM policy is to refuse access to the entire consistency group if any part of it is
inconsistent. Stand-alone relationships and consistency groups share a common
configuration and state model. All the relationships in a non-empty consistency group have
the same state as the consistency group.

10.6.4 Remote Copy intercluster communication


With traditional Fibre Channel (FC), the intercluster communication between systems in a
MM/GM partnership is performed over the SAN. This section describes this communication
path.

For more information about intercluster communication between systems in an IP
partnership, see 10.8.6, “States of IP partnership” on page 633.

Zoning
At least two FC ports of every node of each system must communicate with each other to
create the partnership. Switch zoning is critical to facilitate intercluster communication.

Intercluster communication channels


When an IBM Spectrum Virtualize system partnership is defined on a pair of systems, the
following intercluster communication channels are established:
򐂰 A single control channel, which is used to exchange and coordinate configuration
information
򐂰 I/O channels between each of these nodes in the systems

These channels are maintained and updated as nodes and links appear and disappear from
the fabric, and are repaired to maintain operation where possible. If communication between
the systems is interrupted or lost, an event is logged (and the MM/GM relationships stop).



Alerts: You can configure the system to raise SNMP traps to the enterprise monitoring
system to alert on events that indicate an interruption in internode communication
occurred.

Intercluster links
All IBM FlashSystem 9100 nodes maintain a database of other devices that are visible on the
fabric. This database is updated as devices appear and disappear.

Devices that advertise themselves as Storwize V7000 nodes, IBM FlashSystem 9100
systems, or SAN Volume Controllers are categorized according to the system to which they
belong. Nodes that belong to the same system establish communication channels between
themselves and exchange messages to implement clustering and the functional protocols of
IBM Spectrum Virtualize.

Nodes that are in separate systems do not exchange messages after initial discovery is
complete unless they are configured together to perform an RC relationship.

The intercluster link carries control traffic to coordinate activity between two systems. The link
is formed between one node in each system. The traffic between the designated nodes is
distributed among logins that exist between those nodes.

If the designated node fails (or all its logins to the remote system fail), a new node is chosen
to carry control traffic. This node change causes the I/O to pause, but it does not put the
relationships in a ConsistentStopped state.

Note: Run chsystem -partnerfcportmask to dedicate at least two FC ports per node only
to system-to-system traffic to ensure that RC is not affected by other traffic, such as
host-to-node traffic or node-to-node traffic within the same system.
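As an illustrative sketch only, the following command dedicates FC ports 1 and 2 of every node
to system-to-system traffic. The mask is a binary string that is read right to left, one bit per FC
port, and the value shown is an example:

   chsystem -partnerfcportmask 0000000000000011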

10.6.5 Metro Mirror overview


MM establishes a synchronous relationship between two volumes of equal size. The volumes
in an MM relationship are referred to as the master (primary) volume and the auxiliary
(secondary) volume. Traditional FC MM is primarily used in a metropolitan area or
geographical area, up to a maximum distance of 300 km (186.4 miles) to provide
synchronous replication of data.

With synchronous copies, host applications write to the master volume, but they do not
receive confirmation that the write operation completed until the data is written to the auxiliary
volume. This action ensures that both the volumes have identical data when the copy
completes. After the initial copy completes, the MM function always maintains a fully
synchronized copy of the source data at the target site.

MM has the following characteristics:


򐂰 Zero recovery point objective (RPO)
򐂰 Synchronous
򐂰 Production application performance that is affected by round-trip latency

Increased distance directly affects host I/O performance because the writes are synchronous.
Use the requirements for application performance when you are selecting your MM auxiliary
location.

Consistency groups can be used to maintain data integrity for dependent writes, which is
similar to FlashCopy consistency groups.

IBM Spectrum Virtualize provides intracluster and intercluster MM.

Intracluster Metro Mirror


Intracluster MM performs the intracluster copying of a volume in which both volumes belong
to the same system and I/O group within the system. Because the copies are within the same
I/O group, there must be sufficient bitmap space within the I/O group for both sets of volumes,
and appropriate licensing on the system.

Important: Performing MM across I/O groups within a system is not supported.

Intercluster Metro Mirror


Intercluster MM performs intercluster copying of a volume in which one volume belongs to a
system and the other volume belongs to a separate system.

Two IBM Spectrum Virtualize systems must be defined in a partnership, which must be
performed on both systems to establish a fully functional MM partnership.

By using standard single-mode connections, the supported distance between two systems in
an MM partnership is 10 km (6.2 miles), although greater distances can be achieved by using
extenders. For extended distance solutions, contact your IBM representative.
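As a sketch only (the remote system name ITSO-SV2 and the bandwidth values are
assumptions), an FC partnership can be created by running the mkfcpartnership command:

   mkfcpartnership -linkbandwidthmbits 2000 -backgroundcopyrate 50 ITSO-SV2

The equivalent command must also be run on the remote system before the partnership
becomes fully configured.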

Limit: When a local fabric and a remote fabric are connected for MM purposes, the
inter-switch link (ISL) hop count between a local node and a remote node cannot exceed
seven.

10.6.6 Synchronous Remote Copy


MM is a fully synchronous RC technique that ensures that writes are committed at both the
master and auxiliary volumes before write completion is acknowledged to the host, but only if
writes to the auxiliary volumes are possible.

Events, such as a loss of connectivity between systems, can cause mirrored writes from the
master volume and the auxiliary volume to fail. In that case, MM suspends writes to the
auxiliary volume and enables I/O to the master volume to continue to avoid affecting the
operation of the master volumes.



Figure 10-87 shows how a write to the master volume is mirrored to the cache of the auxiliary
volume before an acknowledgment of the write is sent back to the host that issued the write.
This process ensures that the auxiliary is synchronized in real time if it is needed in a failover
situation.

Figure 10-87 Write on volume in a Metro Mirror relationship

However, this process also means that the application is exposed to the latency and
bandwidth limitations (if any) of the communication link between the master and auxiliary
volumes. This process might lead to unacceptable application performance, particularly when
placed under peak load. Therefore, traditional FC MM has distance limitations that are based
on your performance requirements. IBM Spectrum Virtualize does not support more than
300 km (186.4 miles).

10.6.7 Metro Mirror features


The IBM Spectrum Virtualize MM function supports the following features:
򐂰 Synchronous RC of volumes that are dispersed over metropolitan distances.
򐂰 The MM relationships between volume pairs, with each volume in a pair managed by a
Storwize or an IBM FlashSystem 9100 system.
򐂰 Supports intracluster MM where both volumes belong to the same system (and I/O group).
򐂰 IBM Spectrum Virtualize supports intercluster MM where each volume belongs to a
separate system. You can configure a specific system for partnership with another system.
All intercluster MM processing occurs between two IBM Spectrum Virtualize systems that
are configured in a partnership.
򐂰 Intercluster and intracluster MM can be used concurrently.

򐂰 IBM Spectrum Virtualize does not require that a control network or fabric is installed to
manage MM. For intercluster MM, the system maintains a control link between two
systems. This control link is used to control the state and coordinate updates at either end.
The control link is implemented on top of the same FC fabric connection that the system
uses for MM I/O.
򐂰 IBM Spectrum Virtualize implements a configuration model that maintains the MM
configuration and state through major events, such as failover, recovery, and
resynchronization, to minimize user configuration action through these events.

IBM Spectrum Virtualize supports the resynchronization of changed data so that write failures
that occur on the master or auxiliary volumes do not require a complete resynchronization of
the relationship.

10.6.8 Metro Mirror attributes


The MM function in IBM Spectrum Virtualize possesses the following attributes:
򐂰 A partnership is created between two IBM Spectrum Virtualize systems that are operating
in the same layer, either storage or replication (for intercluster MM).
򐂰 An MM relationship is created between two volumes of the same size.
򐂰 To manage multiple MM relationships as one entity, relationships can be made part of an
MM consistency group, which ensures data consistency across multiple MM relationships
and provides ease of management.
򐂰 When an MM relationship is started and when the background copy completes, the
relationship becomes consistent and synchronized.
򐂰 After the relationship is synchronized, the auxiliary volume holds a copy of the production
data at the primary, which can be used for DR.
򐂰 The auxiliary volume is in read-only mode when the relationship is active.
򐂰 To access the auxiliary volume, the MM relationship must be stopped with the access
option that was enabled before write I/O is allowed to access the auxiliary.
򐂰 The remote host server is mapped to the auxiliary volume, and the disk is available for I/O.

10.6.9 Practical use of Metro Mirror


The master volume is the production volume, and updates to this copy are mirrored in real
time to the auxiliary volume. The contents of the auxiliary volume that existed when the
relationship was created are deleted.

Switching copy direction: The copy direction for an MM relationship can be switched so
that the auxiliary volume becomes the master, and the master volume becomes the
auxiliary, which is similar to the FlashCopy restore option. However, although the
FlashCopy target volume can operate in read/write mode, the target volume of the started
RC is always in read-only mode.

While the MM relationship is active, the auxiliary volume is not accessible for host application
write I/O at any time. IBM Spectrum Virtualize allows read-only access to the auxiliary
volume when it contains a consistent image. This read-only access allows boot-time OS
discovery to complete without an error so that any hosts at the secondary site can be ready to
start the applications with minimum delay, if required.



For example, many OSs must read logical block address (LBA) zero to configure a logical unit
(LU). Although read access is allowed at the auxiliary in practice, the data on the auxiliary
volumes cannot be read by a host because most OSs write a “dirty bit” to the file system
when it is mounted. Because this write operation is not allowed on the auxiliary volume, the
volume cannot be mounted.

Access is provided only where consistency can be ensured. However, coherency cannot be
maintained between reads that are performed at the auxiliary and later write I/Os that are
performed at the master.

To enable access to the auxiliary volume for host operations, you must stop the MM
relationship by specifying the -access parameter. While access to the auxiliary volume for
host operations is enabled, the host must be instructed to mount the volume before the
application can be started, or instructed to perform a recovery process.

The MM requirement to enable the auxiliary copy for access differentiates it from
third-party mirroring software on the host, which aims to emulate a single, reliable disk
regardless of what system is accessing it. MM retains the property that two volumes exist,
but it suppresses one volume while the copy is being maintained.

Using an auxiliary copy demands a conscious policy decision by the administrator that a
failover is required, and that the tasks to be performed on the host that is involved in
establishing the operation on the auxiliary copy are substantial. The goal is to make this copy
rapid (much faster when compared to recovering from a backup copy) but not seamless.

The failover process can be automated through failover management software. IBM Spectrum
Virtualize provides SNMP traps and programming (or scripting) for the CLI to enable this
automation.
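For example, a failover procedure might enable write access to the auxiliary copies with a
single scripted CLI call, as in this minimal sketch (the relationship and consistency group
names are hypothetical):

   stoprcrelationship -access REL_DB1
   stoprcconsistgrp -access CG_DB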

10.6.10 Global Mirror overview


This section describes the GM copy service, which is an asynchronous RC service. This
service provides and maintains a consistent mirrored copy of a source volume to a target
volume.

This GM function establishes a GM relationship between two volumes of equal size. The
volumes in a GM relationship are referred to as the master (source) volume and the auxiliary
(target) volume, which is the same as MM. Consistency groups can be used to maintain data
integrity for dependent writes, which is like FlashCopy consistency groups.

GM writes data to the auxiliary volume asynchronously, which means that host writes to the
master volume provide the host with confirmation that the write is complete before the I/O
completes on the auxiliary volume.

GM has the following characteristics:


򐂰 Near-zero RPO
򐂰 Asynchronous
򐂰 Production application performance that is affected by I/O sequencing preparation time

Intracluster Global Mirror
Although GM is available for intracluster communication, it has no functional value for
production use. Intracluster MM provides the same capability with less processor use.
However, leaving this function in place simplifies testing and supports client experimentation
and testing (for example, to validate server failover on a single test system). As with
intracluster MM, you must consider the increase in the license requirements because the
source and target exist on the same IBM Spectrum Virtualize system.

Intercluster Global Mirror


Intercluster GM operations require a pair of IBM Spectrum Virtualize systems that are
connected by several intercluster links. The two systems must be defined in a partnership to
establish a fully functional GM relationship.

Limit: When a local fabric and a remote fabric are connected for GM purposes, the ISL
hop count between a local node and a remote node must not exceed seven hops.

10.6.11 Asynchronous Remote Copy


GM is an asynchronous RC technique. In asynchronous RC, the write operations are
completed on the primary site and the write acknowledgment is sent to the host before it is
received at the secondary site. An update of this write operation is sent to the secondary site
at a later stage, which provides the capability to perform RC over distances that exceed the
limitations of synchronous RC.

The GM function provides the same function as MM RC, but over long-distance links with
higher latency without requiring the hosts to wait for the full round-trip delay of the
long-distance link.

Figure 10-88 shows that a write operation to the master volume is acknowledged back to the
host that is issuing the write before the write operation is mirrored to the cache for the
auxiliary volume.

Figure 10-88 Global Mirror write sequence



The GM algorithms maintain a consistent image on the auxiliary by identifying sets of I/Os
that are active concurrently at the master, assigning an order to those sets, and applying
those sets of I/Os in the assigned order at the secondary. As a result, GM maintains the
features of Write Ordering and Read Stability.

The multiple I/Os within a single set are applied concurrently. The process that marshals the
sequential sets of I/Os operates at the secondary system. Therefore, the process is not
subject to the latency of the long-distance link. These two elements of the protocol ensure
that the throughput of the total system can be grown by increasing system size while
maintaining consistency across a growing data set.

GM write I/O from the production system to a secondary system requires serialization and
sequence-tagging before it is sent across the network to a remote site (to maintain a
write-order consistent copy of data).

To avoid affecting the production site, IBM Spectrum Virtualize supports more parallelism in
processing and managing GM writes on the secondary system by using the following
methods:
򐂰 Secondary system nodes store replication writes in new redundant non-volatile cache.
򐂰 Cache content details are shared between nodes.
򐂰 Cache content details are batched together to make node-to-node latency less of an
issue.
򐂰 Nodes intelligently apply these batches in parallel as soon as possible.
򐂰 Nodes internally manage and optimize GM secondary write I/O processing.

In a failover scenario where the secondary site must become the master source of data,
certain updates might be missing at the secondary site. Therefore, any applications that use
this data must have an external mechanism for recovering the missing updates and
reapplying them, such as a transaction log replay.

GM is supported over FC, Fibre Channel over IP (FCIP), Fibre Channel over Ethernet
(FCoE), and native IP connections. The maximum supported round-trip latency is 250 ms,
which corresponds to about 4000 km (2485.48 miles) between mirrored systems.
Figure 10-89 shows the current supported round-trip times for GM RC.

Figure 10-89 Supported Global Mirror round-trip times

10.6.12 Global Mirror features


IBM Spectrum Virtualize GM supports the following features:
򐂰 Asynchronous RC of volumes that are dispersed over metropolitan-scale distances.
򐂰 IBM Spectrum Virtualize implements the GM relationship between a volume pair, with
each volume in the pair being managed by an IBM Spectrum Virtualize system.
򐂰 IBM Spectrum Virtualize supports intracluster GM where both volumes belong to the same
system (and I/O group).

򐂰 An IBM Spectrum Virtualize system can be configured for partnership with 1 - 3 other
systems. For more information about IP partnership restrictions, see 10.8.2, “IP
partnership limitations” on page 629.
򐂰 Intercluster and intracluster GM can be used concurrently, but not for the same volume.
򐂰 IBM Spectrum Virtualize does not require a control network or fabric to be installed to
manage GM. For intercluster GM, the system maintains a control link between the two
systems. This control link is used to control the state and coordinate the updates at either
end. The control link is implemented on top of the same FC fabric connection that the
system uses for GM I/O.
򐂰 IBM Spectrum Virtualize implements a configuration model that maintains the GM
configuration and state through major events, such as failover, recovery, and
resynchronization, to minimize user configuration action through these events.
򐂰 IBM Spectrum Virtualize implements flexible resynchronization support, enabling it to
resynchronize volume pairs that experienced write I/Os to both disks and resynchronize
only those regions that changed.
򐂰 An optional feature for GM is a delay simulation to be applied on writes that are sent to
auxiliary volumes. It is useful in intracluster scenarios for testing purposes.

Colliding writes
The GM algorithm requires that only a single write is active on a volume. I/Os that overlap an
active IO are sequential, which is called colliding writes. If another write is received from a
host while the auxiliary write is still active, the new host write is delayed until the auxiliary
write is complete. This rule is needed if a series of writes to the auxiliary must be tried again,
which is called reconstruction. Conceptually, the data for reconstruction comes from the
master volume.

If multiple writes are allowed to be applied to the master for a sector, only the most recent
write gets the correct data during reconstruction. If reconstruction is interrupted for any
reason, the intermediate state of the auxiliary is inconsistent. Applications that deliver such
write activity do not achieve the performance that GM is intended to support. A volume
statistic is maintained about the frequency of these collisions.

An attempt is made to allow multiple writes to a single location to be outstanding in the GM
algorithm. There is still a need for master writes to be sequential, and the intermediate states
of the master data must be kept in a non-volatile journal while the writes are outstanding to
maintain the correct write ordering during reconstruction. Reconstruction must never
overwrite data on the auxiliary with an earlier version. The volume statistic that is monitoring
colliding writes is now limited to those writes that are not affected by this change.



Figure 10-90 shows a colliding write sequence example.

Figure 10-90 Colliding writes example

The following numbers correspond to the numbers that are shown in Figure 10-90:
򐂰 (1): The first write is performed from the host to LBA X.
򐂰 (2): The host receives acknowledgment that the write completed even though the mirrored
write to the auxiliary volume is not yet complete.
򐂰 (1’) and (2’) occur asynchronously with the first write.
򐂰 (3): The second write is performed from the host to LBA X. If this write occurs before (2’),
the write is written to the journal file.
򐂰 (4): The host receives acknowledgment that the second write is complete.

Delay simulation
GM provides a feature that enables a delay simulation to be applied on writes that are sent to
the auxiliary volumes. With this feature, tests can be performed to detect colliding writes, and
an application can be tested before full deployment. The feature can be enabled separately
for intracluster or intercluster GM.

By running the chsystem command, the delay setting can be set up. By running the lssystem
command, the delay can be checked. The gm_intra_cluster_delay_simulation field
expresses the amount of time that intracluster auxiliary I/Os are delayed. The
gm_inter_cluster_delay_simulation field expresses the amount of time that intercluster
auxiliary I/Os are delayed. A value of zero disables the feature.
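For example, this minimal sketch enables a 20-millisecond delay for intercluster GM writes
and later disables it again (the value is illustrative):

   chsystem -gminterdelaysimulation 20
   chsystem -gminterdelaysimulation 0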

Tip: If you are experiencing repeated problems with the delay on your link, make sure that
the delay simulator was properly disabled.

10.6.13 Using GMCV


GM is designed to achieve an RPO as low as possible so that data is as up-to-date as
possible. This design places several strict requirements on your infrastructure. In certain
situations with low network link quality, congested hosts, or overloaded hosts, you might be
affected by multiple 1920 congestion errors.

Congestion errors happen in the following primary situations:
򐂰 At the source site through the host or network
򐂰 In the network link or network path
򐂰 At the target site through the host or network

GM has functions that are designed to address the following conditions, which might
negatively affect certain GM implementations:
򐂰 The estimation of the bandwidth requirements tends to be complex.
򐂰 Ensuring that the latency and bandwidth requirements can be met is often difficult.
򐂰 Congested hosts on the source or target site can cause disruption.
򐂰 Congested network links can cause disruption with only intermittent peaks.

To address these issues, Change Volumes were added as an option for GM relationships.
Change Volumes use the FlashCopy function, but they cannot be manipulated as FlashCopy
volumes because they are for a special purpose only. Change Volumes replicate PiT images
on a cycling period. The default is 300 seconds.

Your change rate needs to include only the state of the data at the PiT when the image was
taken, rather than all of the updates during the period. This function can provide significant
reductions in replication volume.

GMCV has the following characteristics:


򐂰 Larger RPO.
򐂰 PiT copies.
򐂰 Asynchronous.
򐂰 Possible system performance resource requirements because PiT copies are created
locally.

Figure 10-91 shows a simple GM relationship without Change Volumes.

Figure 10-91 Global Mirror without Change Volumes



With Change Volumes, this environment looks as it is shown in Figure 10-92.

Figure 10-92 Global Mirror with Change Volumes

With Change Volumes, a FlashCopy mapping exists between the primary volume and the
primary change volume. The mapping is updated in the cycling period (60 seconds to one
day). The primary change volume is then replicated to the secondary GM volume at the target
site, which is then captured in another change volume on the target site. This approach
provides an always consistent image at the target site and protects your data from being
inconsistent during resynchronization.

For more information about FlashCopy, see 10.1, “IBM FlashCopy” on page 506.

You can adjust the cycling period by running the chrcrelationship -cycleperiodseconds
<60 - 86400> command. The default value is 300 seconds. If a copy does not complete in the
cycle period, the next cycle does not start until the prior cycle completes. For this reason,
using Change Volumes provides the following possibilities for RPO:
򐂰 If your replication completes in the cycling period, your RPO is twice the cycling period.
򐂰 If your replication does not complete within the cycling period, your RPO is twice the
completion time. The next cycling period starts immediately after the prior cycling period is
finished.
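As a sketch (the relationship name REL_DB1 is hypothetical), the following command sets a
10-minute cycling period, which targets an RPO of at most 20 minutes when each replication
cycle completes within the period:

   chrcrelationship -cycleperiodseconds 600 REL_DB1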

Carefully consider your business requirements versus the performance of GMCV. GMCV
increases the intercluster traffic for more frequent cycling periods. Therefore, selecting the
shortest cycle periods possible is not always the answer. In most cases, the default must
meet requirements and perform well.

Important: When you create your GMCV volumes, make sure that you select the change
volume on the auxiliary (target) site. Failure to do so leaves you exposed during a
resynchronization operation.

10.6.14 Distribution of work among nodes


For the best performance, MM/GM volumes must have their preferred nodes evenly
distributed among the nodes of the systems. Each volume within an I/O group has a preferred
node property that can be used to balance the I/O load between nodes in that group. MM/GM
also uses this property to route I/O between systems.

If this best practice is not maintained, such as if source volumes are assigned to only one
node in the I/O group, you can change the preferred node for each volume to distribute
volumes evenly between the nodes. You can also change the preferred node for volumes that
are in an RC relationship without affecting the host I/O to a particular volume.

The RC relationship type does not matter; it can be MM, GM, or GMCV. You can change the
preferred node for both the source and target volumes that are participating in the RC
relationship.
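For example, the preferred node of a volume can be changed nondisruptively by running the
movevdisk command, as in this minimal sketch (the volume name, I/O group, and node ID are
assumptions; specifying the volume’s current caching I/O group changes only the preferred
node):

   movevdisk -iogrp 0 -node 2 VOL01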

10.6.15 Background copy performance


Background copy performance is subject to sufficient RAID controller bandwidth.
Performance is also subject to other potential bottlenecks, such as the intercluster fabric, and
possible contention from host I/O for the IBM Spectrum Virtualize system bandwidth
resources.

Background copy I/O is scheduled to avoid bursts of activity that might have an adverse effect
on system behavior. An entire grain of tracks on one volume is processed at around the same
time, but not as a single I/O. Double buffering is used to try to use sequential performance
within a grain. However, the next grain within the volume might not be scheduled for some
time. Multiple grains might be copied simultaneously, which is normally enough to satisfy the
requested rate, unless the available resources cannot sustain the requested rate.

GM paces the rate at which a background copy is performed by the appropriate relationships.
A background copy occurs on relationships that are in the InconsistentCopying state with a
status of Online.

The quota of a background copy (configured on the intercluster link) is divided evenly
between all nodes that are performing a background copy for one of the eligible relationships.
This allocation is made regardless of the number of disks for which the node is responsible.
Each node in turn divides its allocation evenly between the multiple relationships that are
performing a background copy.

The default value of the background copy is 25 megabytes per second (MBps) per volume.

Important: The background copy value is a system-wide parameter that can be changed
dynamically, but only on a per-system basis and not on a per-relationship basis. Therefore,
the copy rate of all relationships changes when this value is increased or decreased. In
systems with many RC relationships, increasing this value might affect overall system or
intercluster link performance. The background copy rate can be changed to 1 - 1000 MBps.
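For example, this system-wide, per-relationship limit can be adjusted as in the following
sketch (the value of 50 MBps is arbitrary):

   chsystem -relationshipbandwidthlimit 50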

10.6.16 Thin-provisioned background copy


MM/GM relationships preserve the space-efficiency of the master. Conceptually, the
background copy process detects a deallocated region of the master and sends a special
zero buffer to the auxiliary.

If the auxiliary volume is thin-provisioned and the region is deallocated, the special buffer
prevents a write and an allocation. If the auxiliary volume is not thin-provisioned or the region
in question is an allocated region of a thin-provisioned volume, a buffer of “real” zeros is
synthesized on the auxiliary and written as normal.



10.6.17 Methods of synchronization
This section describes two methods that can be used to establish a synchronized
relationship.

Full synchronization after creation


The full synchronization after creation method is the default method. It is the simplest method
because it requires no administrative activity apart from running the necessary commands.
However, in certain environments, the available bandwidth can make this method unsuitable.

Run the following command sequence for a single relationship:


1. Run mkrcrelationship without specifying the -sync option.
2. Run startrcrelationship without specifying the -clean option.
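A minimal sketch of this default sequence follows (the volume, remote system, and
relationship names are hypothetical; add the -global flag to create a GM relationship instead
of an MM relationship):

   mkrcrelationship -master VOL01 -aux VOL01_DR -cluster ITSO-SV2 -name REL_DB1
   startrcrelationship REL_DB1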

Synchronized before creation


In this method, the administrator must ensure that the master and auxiliary volumes contain
identical data before creating the relationship by using the following technique:
򐂰 Both disks are created with the security delete feature to make all data zero.
򐂰 A complete tape image (or other method of moving data) is copied from one disk to the
other disk.

With this technique, do not allow I/O on the master or auxiliary before the relationship is
established. Then, the administrator must run the following commands:
1. Run mkrcrelationship with the -sync flag.
2. Run startrcrelationship without the -clean flag.

Important: Failure to perform these steps correctly can cause MM/GM to report the
relationship as consistent when it is not, possibly causing a data loss or data integrity
exposure for hosts accessing data on the auxiliary volume.

10.6.18 Practical use of Global Mirror


The practical use of GM is similar to MM, as described in 10.6.9, “Practical use of Metro
Mirror” on page 601. The main difference between the two RC modes is that GM and GMCV
are mostly used on much larger distances than MM. Weak link quality or insufficient
bandwidth between the primary and secondary sites can also be a reason to prefer
asynchronous GM over synchronous MM. Otherwise, the use cases for MM/GM are the
same.

10.6.19 IBM Spectrum Virtualize HyperSwap topology


The HyperSwap topology is based on IBM Spectrum Virtualize Remote Copy mechanisms. It
is also referred to as an “active-active relationship” in this document.

You can create a HyperSwap topology system configuration where each I/O group in the
system is physically on a different site. These configurations can be used to maintain access
to data on the system when power failures or site-wide outages occur.

In a HyperSwap configuration, each site is defined as an independent failure domain. If one
site experiences a failure, the other site can continue to operate without disruption. You
must also configure a third site to host a quorum device or IP quorum application that
provides an automatic tie-break in case of a link failure between the two main sites. The main
sites can be in the same room, in different rooms of the same data center, in buildings on the
same campus, or in buildings in different cities. Different kinds of sites protect against
different types of failures.

For more information about HyperSwap implementation and best practices, see IBM Storwize
V7000, Spectrum Virtualize, HyperSwap, and VMware Implementation, SG24-8317.

10.6.20 Consistency Protection for Global Mirror and Metro Mirror


MM, GM, GMCV, and HyperSwap Copy Services functions create RC or remote replication
relationships between volumes or consistency groups. If the secondary volume in a Copy
Services relationship becomes unavailable to the primary volume, the system maintains the
relationship. However, the data might become out of sync when the secondary volume
becomes available.

Since Version 7.8, it is possible to create a FlashCopy mapping (change volume) for an RC
target volume to maintain a consistent image of the secondary volume. The system
recognizes it as a Consistency Protection, and a link failure or an offline secondary volume
event is handled differently now.

When Consistency Protection is configured, the relationship between the primary and
secondary volumes does not stop if the link goes down or the secondary volume is offline.
The relationship does not go into the consistent stopped status. Instead, the system uses
the secondary change volume to automatically copy the previous consistent state of the
secondary volume. The relationship automatically moves to the consistent copying status
as the system resynchronizes and protects the consistency of the data.

The relationship status changes to consistent synchronized when the resynchronization


process completes. The relationship automatically resumes replication after the temporary
loss of connectivity.

Change Volumes that are used for Consistency Protection are not visible or manageable from
the GUI because they are used for Consistency Protection internal behavior only.

You do not need to configure a secondary change volume on an MM/GM (without cycling)
relationship. However, if the link goes down or the secondary volume is offline, the
relationship goes into the consistent stopped status. If write operations take place on either
the primary or secondary volume, the data is no longer synchronized (out of sync).

Consistency Protection must be enabled on all relationships in a consistency group. Every
relationship in a consistency group must be configured with a secondary change volume. If a
secondary change volume is not configured on one relationship, the entire consistency group
stops with a 1720 error if host I/O is processed when the link is down or any secondary
volume in the consistency group is offline. All relationships in the consistency group cannot
retain a consistent copy during resynchronization.

The option to add Consistency Protection is selected by default when MM/GM relationships
are created. The option must be cleared to create MM/GM relationships without Consistency
Protection.
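From the CLI, Consistency Protection can be added to an existing relationship by associating
change volumes with it, as in this hedged sketch (the volume and relationship names are
assumptions; each command is run on the system that owns the corresponding change
volume, and each change volume must match the size of the volume it protects):

   chrcrelationship -masterchange VOL01_CV REL_DB1
   chrcrelationship -auxchange VOL01_DR_CV REL_DB1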



10.6.21 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror
Table 10-11 lists the valid combinations of FlashCopy and MM/GM functions for a single
volume.

Table 10-11 Valid combinations for a single volume


FlashCopy           Metro Mirror or Global      Metro Mirror or Global
                    Mirror source               Mirror target

FlashCopy source    Supported                   Supported
FlashCopy target    Supported                   Not supported

10.6.22 Remote Copy configuration limits


Table 10-12 lists the MM/GM configuration limits.

Table 10-12 Metro Mirror configuration limits


Parameter                                        Value

Number of Metro Mirror or GM consistency         256
groups per system

Number of Metro Mirror or GM relationships       10,000
per system

Number of Metro Mirror or GM relationships       10,000
per consistency group

Total volume size per I/O group                  There is a per I/O group limit of 1024 TB on the quantity
                                                 of master and auxiliary volume address spaces that can
                                                 participate in MM and GM relationships. This maximum
                                                 configuration uses all 512 MiB of bitmap space for the I/O
                                                 group and allows 10 MiB of space for all remaining copy
                                                 services features.

10.6.23 Remote Copy states and events


This section describes the various states of a MM/GM relationship and the conditions that
cause them to change. In Figure 10-93 on page 613, the MM/GM relationship diagram shows
an overview of the statuses that can apply to a MM/GM relationship in a connected state.

Figure 10-93 Metro Mirror or Global Mirror mapping state diagram

When the MM/GM relationship is created, you can specify whether the auxiliary volume is
already in sync with the master volume so that the background copy process is skipped. This
capability is useful when MM/GM relationships are established for volumes that were created
with the format option.

Here are the steps that are shown in Figure 10-93:


򐂰 Step 1:
a. The MM/GM relationship is created with the -sync option, and the MM/GM relationship
enters the ConsistentStopped state.
b. The MM/GM relationship is created without specifying that the master and auxiliary
volumes are in sync, and the MM/GM relationship enters the InconsistentStopped
state.
򐂰 Step 2:
a. When an MM/GM relationship is started in the ConsistentStopped state, the MM/GM
relationship enters the ConsistentSynchronized state. Therefore, no updates (write
I/O) were performed on the master volume while in the ConsistentStopped state.
Otherwise, the -force option must be specified, and then the MM/GM relationship
enters the InconsistentCopying state while the background copy is started.
b. When an MM/GM relationship is started in the InconsistentStopped state, the MM/GM
relationship enters the InconsistentCopying state while the background copy is
started.
򐂰 Step 3:
When the background copy completes, the MM/GM relationship changes from the
InconsistentCopying state to the ConsistentSynchronized state.



򐂰 Step 4:
a. When a MM/GM relationship is stopped in the ConsistentSynchronized state, the
MM/GM relationship enters the Idling state when you specify the -access option,
which enables write I/O on the auxiliary volume.
b. When an MM/GM relationship is stopped in the ConsistentSynchronized state without
an -access parameter, the auxiliary volumes remain read-only and the state of the
relationship changes to ConsistentStopped.
c. To enable write I/O on the auxiliary volume, when the MM/GM relationship is in the
ConsistentStopped state, run the svctask stoprcrelationship command, which
specifies the -access option, and the MM/GM relationship enters the Idling state.
򐂰 Step 5:
a. When an MM/GM relationship is started from the Idling state, you must specify the
-primary argument to set the copy direction. If no write I/O was performed (to the
master or auxiliary volume) while in the Idling state, the MM/GM relationship enters
the ConsistentSynchronized state.
b. If write I/O was performed to the master or auxiliary volume, the -force option must be
specified, and then the MM/GM relationship enters the InconsistentCopying state
while the background copy is started. The background process copies only the data
that changed on the primary volume while the relationship was stopped.

Stop on Error
When a MM/GM relationship is stopped (intentionally or because of an error), the state
changes. For example, the MM/GM relationships in the ConsistentSynchronized state enter
the ConsistentStopped state, and the MM/GM relationships in the InconsistentCopying state
enter the InconsistentStopped state.

If the connection is broken between the two systems that are in a partnership, all (intercluster)
MM/GM relationships enter a Disconnected state. For more information, see “Connected
versus disconnected” on page 614.

Common states: Stand-alone relationships and consistency groups share a common
configuration and state model. All MM/GM relationships in a consistency group have the
same state as the consistency group.

State overview
The following sections provide an overview of the various MM/GM states.

Connected versus disconnected


Under certain error scenarios (for example, a power failure at one site that causes one
complete system to disappear), communications between two systems in an MM/GM
relationship can be lost. Alternatively, the fabric connection between the two systems might
fail, which leaves the two systems running but unable to communicate with each other.

When the two systems can communicate, the systems and the relationships that span them
are described as connected. When they cannot communicate, the systems and the
relationships spanning them are described as disconnected.

In this state, both systems are left with fragmented relationships and are limited regarding the
configuration commands that can be performed. The disconnected relationships are
portrayed as having a changed state. The new states describe what is known about the
relationship and the configuration commands that are permitted.

When the systems can communicate again, the relationships are reconnected. MM/GM
automatically reconciles the two state fragments, considering any configuration or other event
that occurred while the relationship was disconnected. As a result, the relationship can return
to the state that it was in when it became disconnected, or it can enter a new state.

Relationships that are configured between volumes in the same system (intracluster) are
never described as being in a disconnected state.

Consistent versus inconsistent


Relationships that contain volumes that are operating as secondaries can be described as
being consistent or inconsistent. Consistency groups that contain relationships can also be
described as being consistent or inconsistent. The consistent or inconsistent property
describes the relationship of the data on the auxiliary to the data on the master volume. It can
be considered a property of the auxiliary volume.

An auxiliary volume is described as consistent if it contains data that can be read by a host
system from the master if power failed at an imaginary point while I/O was in progress and
power was later restored. This imaginary point is defined as the recovery point.

The requirements for consistency are expressed regarding activity at the master up to the
recovery point. The auxiliary volume contains the data from all of the writes to the master for
which the host received successful completion and that data was not overwritten by a
subsequent write (before the recovery point).

Consider writes for which the host did not receive a successful completion (that is, it received
bad completion or no completion at all). If the host then performed a read from the master of
that data that returned successful completion and no later write was sent (before the recovery
point), the auxiliary contains the same data as the data that was returned by the read from
the master.

From the point of view of an application, consistency means that an auxiliary volume contains
the same data as the master volume at the recovery point (the time at which the imaginary
power failure occurred). If an application is designed to cope with an unexpected power
failure, this assurance of consistency means that the application can use the auxiliary and
begin operation as though it was restarted after the hypothetical power failure. Again,
maintaining the application write ordering is the key property of consistency.

For more information about dependent writes, see 10.1.13, “FlashCopy and image mode
volumes” on page 529.

If a relationship (or set of relationships) is inconsistent and an attempt is made to start an application by using the data in the secondaries, the following outcomes are possible:
򐂰 The application might decide that the data is corrupted and crash or exit with an event
code.
򐂰 The application might fail to detect that the data is corrupted and return erroneous data.
򐂰 The application might work without a problem.

Because of the risk of data corruption, and in particular undetected data corruption, MM/GM
strongly enforces the concept of consistency and prohibits access to inconsistent data.

Consistency as a concept can be applied to a single relationship or a set of relationships in a consistency group. Write ordering is a concept that an application can maintain across several disks that are accessed through multiple systems. Therefore, consistency must operate across all of those disks.

When you are deciding how to use consistency groups, the administrator must consider the
scope of an application’s data and consider all of the interdependent systems that
communicate and exchange information.

If two programs or systems communicate and store details as a result of the information that
is exchanged, either of the following actions might occur:
򐂰 All of the data that is accessed by the group of systems must be placed into a single
consistency group.
򐂰 The systems must be recovered independently (each within its own consistency group).
Then, each system must perform recovery with the other applications to become
consistent with them.

Consistent versus synchronized


A copy that is consistent and up-to-date is described as synchronized. In a synchronized
relationship, the master and auxiliary volumes differ only in regions where writes are
outstanding from the host.

Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at a point in the past. Write I/O might continue to a master but
not be copied to the auxiliary. This state arises when it becomes impossible to keep data
up-to-date and maintain consistency. An example is a loss of communication between
systems when you are writing to the auxiliary.

When communication is lost for an extended period and Consistency Protection was not
enabled, MM/GM tracks the changes that occurred on the master, but not the order or the
details of such changes (write data). When communication is restored, it is impossible to
synchronize the auxiliary without sending write data to the auxiliary out of order. Therefore,
consistency is lost.

Note: MM/GM relationships with Consistency Protection enabled use a PiT copy
mechanism (FlashCopy) to keep a consistent copy of the auxiliary. The relationships stay
in a consistent state, although not synchronized, even if communication is lost. For more
information about Consistency Protection, see 10.6.20, “Consistency Protection for Global
Mirror and Metro Mirror” on page 611.

Detailed states
The following sections describe the states that are portrayed to the user for consistency
groups or relationships. They also describe information that is available in each state. The
major states are designed to provide guidance about the available configuration commands.

InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O. A copy process must be
started to make the auxiliary consistent. This state is entered when the relationship or
consistency group was InconsistentCopying and suffered a persistent error or received a
stop command that caused the copy process to stop.

A start command causes the relationship or consistency group to move to the InconsistentCopying state. A stop command is accepted, but has no effect.

If the relationship or consistency group becomes disconnected, the auxiliary side makes the
transition to InconsistentDisconnected. The master side changes to IdlingDisconnected.

InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O. This state is entered after a
start command is issued to an InconsistentStopped relationship or a consistency group.

It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or consistency group. In this state, a background copy process runs that copies data from the master to the auxiliary volume.

In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In certain error situations, the copy progress might freeze or even regress.

A persistent error or stop command places the relationship or consistency group into an
InconsistentStopped state. A start command is accepted but has no effect.

If the background copy process completes on a stand-alone relationship or on all relationships for a consistency group, the relationship or consistency group changes to the ConsistentSynchronized state.

If the relationship or consistency group becomes disconnected, the auxiliary side changes to
InconsistentDisconnected. The master side changes to IdlingDisconnected.

ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent
image, but it might be out-of-date in relation to the master. This state can arise when a
relationship was in a ConsistentSynchronized state and experienced an error that forces a
Consistency Freeze. It can also arise when a relationship is created with a
CreateConsistentFlag set to TRUE.

Normally, write activity that follows an I/O error causes updates to the master, and the
auxiliary is no longer synchronized. In this case, consistency must be given up for a period to
reestablish synchronization. You must use a start command with the -force option to
acknowledge this condition, and the relationship or consistency group changes to
InconsistentCopying. Enter this command only after all outstanding events are repaired.

In the unusual case where the master and the auxiliary are still synchronized (perhaps
following a user stop and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this case, you
can enter a switch command that moves the relationship or consistency group to
ConsistentSynchronized and reverses the roles of the master and the auxiliary.

If the relationship or consistency group becomes disconnected, the auxiliary changes to ConsistentDisconnected. The master changes to IdlingDisconnected.

An informational status log is generated whenever a relationship or consistency group enters the ConsistentStopped state with a status of Online. You can configure this event to generate an SNMP trap that can be used to trigger automation or manual intervention to run a start command after a loss of synchronization.

ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible
for read and write I/O, and the auxiliary volume is accessible for read-only I/O. Writes that are
sent to the master volume are also sent to the auxiliary volume. Either successful completion
must be received for both writes, the write must be failed to the host, or a state must change
out of the ConsistentSynchronized state before a write is completed to the host.

A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.

A switch command leaves the relationship in the ConsistentSynchronized state, but it reverses the master and auxiliary roles (it switches the direction of replicating data). A start command is accepted, but has no effect.

If the relationship or consistency group becomes disconnected, the same changes are made
as for ConsistentStopped.

Idling
Idling is a connected state. Both master and auxiliary volumes operate in the master role.
Therefore, both master and auxiliary volumes are accessible for write I/O.

In this state, the relationship or consistency group accepts a start command. MM/GM
maintains a record of regions on each disk that received write I/O while they were idling. This
record is used to determine what areas must be copied following a start command.

The start command must specify the new copy direction. A start command can cause a
loss of consistency if either volume in any relationship received write I/O, which is indicated by
the Synchronized status. If the start command leads to loss of consistency, you must specify
the -force parameter.

Following a start command, the relationship or consistency group changes to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is a loss of consistency.

Also, the relationship or consistency group accepts a -clean option on the start command
while in this state. If the relationship or consistency group becomes disconnected, both sides
change their state to IdlingDisconnected.

IdlingDisconnected
IdlingDisconnected is a disconnected state. The target volumes in this half of the
relationship or consistency group are all in the master role and accept read or write I/O.

The priority in this state is to recover the link to restore the relationship or consistency.

No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship changes to a connected state. The
exact connected state that is entered depends on the state of the other half of the relationship
or consistency group, which depends on the following factors:
򐂰 The state when it became disconnected
򐂰 The write activity since it was disconnected
򐂰 The configuration activity since it was disconnected

If both halves are IdlingDisconnected, the relationship becomes Idling when it is reconnected.

While IdlingDisconnected, if a write I/O is received that causes loss of synchronization (the synchronized attribute transitions from true to false) and the relationship was not already stopped (either through a user stop or a persistent error), an event is raised to notify you of the condition. This same event also is raised when this condition occurs for the ConsistentSynchronized state.

InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The target volumes in this half of the
relationship or consistency group are all in the auxiliary role and do not accept read or write
I/O. Except for deletes, no configuration activity is permitted until the relationship becomes
connected again.

When the relationship or consistency group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either of the following conditions are
true:
򐂰 The relationship was InconsistentStopped when it became disconnected.
򐂰 The user ran a stop command while disconnected.

In either case, the relationship or consistency group becomes InconsistentStopped.

ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The target volumes in this half of the
relationship or consistency group are all in the auxiliary role, and accept read I/O but not write
I/O.

This state is entered from ConsistentSynchronized or ConsistentStopped when the auxiliary side of a relationship becomes disconnected.

In this state, the relationship or consistency group displays an attribute of FreezeTime, which
is the point when consistency was frozen. When it is entered from ConsistentStopped, it
retains the time that it had in that state. When it is entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or consistency group was known to
be consistent. This time corresponds to the time of the last successful heartbeat to the
other system.

A stop command with the -access flag set to true transitions the relationship or consistency
group to the IdlingDisconnected state. This state allows write I/O to be performed to the
auxiliary volume, and it is used as part of a DR scenario.

When the relationship or consistency group becomes connected again, the relationship or
consistency group becomes ConsistentSynchronized only if this action does not lead to a
loss of consistency. The following conditions must be true:
򐂰 The relationship was ConsistentSynchronized when it became disconnected.
򐂰 No writes received successful completion at the master while disconnected.

Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.

Empty
This state applies only to consistency groups. It is the state of a consistency group that has
no relationships and no other state information to show.

It is entered when a consistency group is first created. It is exited when the first relationship is
added to the consistency group, at which point the state of the relationship becomes the state
of the consistency group.

10.7 Remote Copy commands
This section presents commands that must be run to create and operate RC services.

10.7.1 Remote Copy process


The MM/GM process includes the following steps:
1. A system partnership is created between two IBM Spectrum Virtualize systems (for
intercluster MM/GM).
2. A MM/GM relationship is created between two volumes of the same size.
3. To manage multiple MM/GM relationships as one entity, the relationships can be made
part of a MM/GM consistency group to ensure data consistency across multiple MM/GM
relationships or for ease of management.
4. The MM/GM relationship is started. When the background copy completes, the
relationship is consistent and synchronized. When synchronized, the auxiliary volume
holds a copy of the production data at the master that can be used for DR.
5. To access the auxiliary volume, the MM/GM relationship must be stopped with the access
option enabled before write I/O is submitted to the auxiliary.

Following these steps, the remote host server is mapped to the auxiliary volume and the disk
is available for I/O.

For more information about MM/GM commands, see IBM Knowledge Center and expand
Command-line interface → Copy Service commands.

The command set for MM/GM contains the following broad groups:
򐂰 Commands to create, delete, and manipulate relationships and consistency groups
򐂰 Commands to cause state changes

If a configuration command affects more than one system, MM/GM coordinates configuration
activity between the systems. Certain configuration commands can be performed only when
the systems are connected, and fail with no effect when they are disconnected.

Other configuration commands are permitted even though the systems are disconnected. The
state is reconciled automatically by MM/GM when the systems become connected again.

For any command (with one exception), a single system receives the command from the administrator. This design is significant for defining the context for a create-relationship (mkrcrelationship) or create-consistency-group (mkrcconsistgrp) command. In this case, the system that is receiving the command is called the local system.

The exception is a command that sets systems into a MM/GM partnership. The
mkfcpartnership and mkippartnership commands must be run on both the local and remote
systems.

The commands in this section are described as an abstract command set, and are
implemented by either of the following methods:
򐂰 CLI can be used for scripting and automation.
򐂰 GUI can be used for one-off tasks.

10.7.2 Listing available system partners
Use the lspartnershipcandidate command to list the systems that are available for setting
up a two-system partnership. This command is a prerequisite for creating MM/GM
relationships.

Note: This command is not supported on IP partnerships. Use mkippartnership for IP connections.
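
For example, running the command on the local system lists each candidate system with its ID and name. The following sketch shows hypothetical output (the system ID and name are assumptions):

lspartnershipcandidate
id                configured  name
00000204A0C1A882  no          ITSO-REMOTE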

10.7.3 Changing the system parameters


To change system parameters that are specific to RC or GM, run the chsystem command. The chsystem command features the following parameters for MM/GM:
򐂰 -relationshipbandwidthlimit cluster_relationship_bandwidth_limit
This parameter controls the maximum rate at which any one RC relationship can
synchronize. The default value for the relationship bandwidth limit is 25 MBps per volume,
but this value can now be specified as 1 - 100,000 MBps.
The partnership overall limit is controlled at a partnership level by the chpartnership
-linkbandwidthmbits command, and must be set on each involved system.

Important: Do not set this value higher than the default without first establishing that
the higher bandwidth can be sustained without affecting the host’s performance. The
limit must never be higher than the maximum that is supported by the infrastructure
connecting the remote sites regardless of the compression rates that you might
achieve.

򐂰 -gmlinktolerance link_tolerance
This parameter specifies the maximum period that the system tolerates delay before
stopping GM relationships. Specify values of 60 - 86,400 seconds in increments of
10 seconds. The default value is 300. Do not change this value except under the direction
of IBM Support.
򐂰 -gmmaxhostdelay max_host_delay
This parameter specifies the maximum time delay, in milliseconds (ms), at which the GM
link tolerance timer starts counting down. This threshold value determines the additional
effect that GM operations can add to the response times of the GM source volumes. You
can use this parameter to increase the threshold from the default value of 5 ms.
򐂰 -maxreplicationdelay max_replication_delay
This parameter sets a maximum replication delay in seconds. The value must be a
number 0 - 360 (0 being the default value, which means no delay). This feature sets the
maximum number of seconds to be tolerated to complete a single I/O. If I/O cannot
complete within the max_replication_delay, the 1920 event is reported. This setting is
system-wide, and applies to MM/GM relationships.

Run the chsystem command to adjust these values, as shown in the following example:
chsystem -gmlinktolerance 300

You can view all of these parameter values by running the lssystem <system_name>
command.

Focus on the gmlinktolerance parameter. If poor response extends past the specified
tolerance, a 1920 event is logged and one or more GM relationships automatically stop to
protect the application hosts at the primary site. During normal operations, application hosts
experience a minimal effect from the response times because the GM feature uses
asynchronous replication.

However, if GM operations experience degraded response times from the secondary system
for an extended period, I/O operations queue at the primary system. This queue results in an
extended response time to application hosts. In this situation, the gmlinktolerance feature
stops GM relationships, and the application host’s response time returns to normal.

After a 1920 event occurs, the GM auxiliary volumes are no longer in the
consistent_synchronized state. Fix the cause of the event and restart your GM relationships.
For this reason, ensure that you monitor the system to track when these 1920 events occur.

You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0
(zero). However, the gmlinktolerance feature cannot protect applications from extended
response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature
under the following circumstances:
򐂰 During SAN maintenance windows in which degraded performance is expected from SAN
components and application hosts can stand extended response times from GM volumes.
򐂰 During periods when application hosts can tolerate extended response times and it is
expected that the gmlinktolerance feature might stop the GM relationships. For example,
if you test by using an I/O generator that is configured to stress the back-end storage, the
gmlinktolerance feature might detect the high latency and stop the GM relationships.
Disabling the gmlinktolerance feature prevents this result at the risk of exposing the test
host to extended response times.

A 1920 event indicates that one or more of the SAN components cannot provide the
performance that is required by the application hosts. This situation can be temporary (for
example, a result of a maintenance activity) or permanent (for example, a result of a hardware
failure or an unexpected host I/O workload).

If 1920 events are occurring, you might need to use a performance monitoring and analysis tool, such as IBM Spectrum Control, to help identify and resolve the problem.

10.7.4 System partnership


To create a partnership, use one of the following commands, depending on the connection
type:
򐂰 Run the mkfcpartnership command to establish a one-way MM/GM partnership between
the local system and a remote system that are linked over an FC (or FCoE) connection.
򐂰 Run the mkippartnership command to establish a one-way MM/GM partnership between
the local system and a remote system that are linked over an IP connection.

To establish a fully functional MM/GM partnership, you must run the command on both
systems. This step is a prerequisite for creating MM/GM relationships between volumes on
the IBM Spectrum Virtualize systems.

When creating the partnership, you must specify the Link Bandwidth and can specify the
Background Copy Rate:
򐂰 The Link Bandwidth, expressed in Mbps, is the amount of bandwidth that can be used for
the FC or IP connection between the systems within the partnership.
򐂰 The Background Copy Rate is the maximum percentage of the Link Bandwidth that can be
used for background copy operations. The default value is 50%.
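
For example, the following minimal sketch (the remote system name and cluster IP address are hypothetical) creates the partnership over FC with a 1000 Mbps link, or over IPv4 with a 100 Mbps link. Run the matching command on the remote system to complete the partnership:

mkfcpartnership -linkbandwidthmbits 1000 -backgroundcopyrate 50 ITSO-REMOTE
mkippartnership -type ipv4 -clusterip 10.10.10.20 -linkbandwidthmbits 100 -backgroundcopyrate 50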

Background copy bandwidth effect on foreground I/O latency


The combination of the Link Bandwidth value and the Background Copy Rate percentage is
referred to as the background copy bandwidth. It must be at least 8 Mbps. For example, if the
Link Bandwidth is set to 10000 and the Background Copy Rate is set to 20, then the resulting
background copy bandwidth that is used for background operations is 2000 Mbps.

The background copy bandwidth determines the rate at which the background copy is
attempted for MM/GM. The background copy bandwidth can affect foreground I/O latency in
one of the following ways:
򐂰 The following results can occur if the background copy bandwidth is set too high compared
to the MM/GM intercluster link capacity:
– The background copy I/Os can back up on the MM/GM intercluster link.
– There is a delay in the synchronous auxiliary writes of foreground I/Os.
– The foreground I/O latency increases as perceived by applications.
򐂰 If the background copy bandwidth is set too high for the storage at the primary site,
background copy read I/Os overload the primary storage and delay foreground I/Os.
򐂰 If the background copy bandwidth is set too high for the storage at the secondary site,
background copy writes at the secondary site overload the auxiliary storage and delay the
synchronous secondary writes of foreground I/Os.

To set the background copy bandwidth optimally, ensure that you consider all three resources:
primary storage, intercluster link bandwidth, and auxiliary storage. Provision the most
restrictive of these three resources between the background copy bandwidth and the peak
foreground I/O workload.

Perform this provisioning by calculation or by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable. Then, reduce the background copy to accommodate peaks in workload.

The chpartnership command


To change the bandwidth that is available for background copy in the system partnership, run
the chpartnership -backgroundcopyrate <percentage_of_link_bandwidth> command to
specify the percentage of whole link capacity to be used by the background copy process.
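
For example, the following sketch (the system name is hypothetical) limits background copy to 30% of the configured link bandwidth:

chpartnership -backgroundcopyrate 30 ITSO-REMOTE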

10.7.5 Creating a Metro Mirror/Global Mirror consistency group


Run the mkrcconsistgrp command to create an empty MM/GM consistency group.

The MM/GM consistency group name must be unique across all consistency groups that are
known to the systems that own this consistency group. If the consistency group involves two
systems, the systems must be in communication throughout the creation process.

The new consistency group does not contain any relationships and is in the Empty state. You
can add MM/GM relationships to the group (upon creation or afterward) by running the
chrcrelationship command.
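
For example, the following sketch creates an empty consistency group that spans the local system and a hypothetical remote system:

mkrcconsistgrp -name CG_ITSO -cluster ITSO-REMOTE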

10.7.6 Creating a Metro Mirror/Global Mirror relationship
Run the mkrcrelationship command to create a MM/GM relationship. This relationship
persists until it is deleted.

Optional parameter: If you do not use the -global optional parameter, an MM relationship
is created rather than a GM relationship.

The auxiliary volume must be equal in size to the master volume or the command fails. If both
volumes are in the same system, they must be in the same I/O group. The master and
auxiliary volume cannot be in an existing relationship, and they cannot be the target of a
FlashCopy mapping. This command returns the new relationship (relationship_id) when
successful.

When the MM/GM relationship is created, you can add it to an existing consistency group, or
it can be a stand-alone MM/GM relationship.
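
For example, the following hedged sketch (all volume, group, and system names are hypothetical) creates an MM relationship and a GM relationship, placing the latter directly into an existing consistency group:

mkrcrelationship -master MM_Vol1 -aux MM_Vol1_AUX -cluster ITSO-REMOTE -name MMREL1
mkrcrelationship -master GM_Vol1 -aux GM_Vol1_AUX -cluster ITSO-REMOTE -global -consistgrp CG_ITSO -name GMREL1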

The lsrcrelationshipcandidate command


Run the lsrcrelationshipcandidate command to list the volumes that are eligible to form an
MM/GM relationship.

When the command runs, you can specify the master volume name and auxiliary system to
list the candidates that comply with the prerequisites to create a MM/GM relationship. If the
command runs with no parameters, all the volumes that are not disallowed by another
configuration state, such as being a FlashCopy target, are listed.

10.7.7 Changing a Metro Mirror/Global Mirror relationship


Run the chrcrelationship command to modify the following properties of an MM/GM
relationship:
򐂰 Change the name of an MM/GM relationship.
򐂰 Add a relationship to a group.
򐂰 Remove a relationship from a group by using the -force flag.

Adding an MM/GM relationship: When an MM/GM relationship is added to a consistency group that is not empty, the relationship must have the same state and copy direction as the group to be added to it.
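
For example, assuming the hypothetical names that are used earlier in this section, the following sketch renames a relationship and then adds it to a consistency group. To remove it again, the -noconsistgrp parameter is used, combined with -force where required; verify the exact flags for your code level:

chrcrelationship -name MMREL1_DR MMREL1
chrcrelationship -consistgrp CG_ITSO MMREL1_DR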

10.7.8 Changing a Metro Mirror/Global Mirror consistency group


Run the chrcconsistgrp command to change the name of an MM/GM consistency group.
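
For example, the following sketch renames a hypothetical consistency group:

chrcconsistgrp -name CG_ITSO_DR CG_ITSO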

10.7.9 Starting a Metro Mirror/Global Mirror relationship


Run the startrcrelationship command to start the copy process of an MM/GM relationship.

When the command runs, you can set the copy direction if it is undefined. Optionally, you can
mark the auxiliary volume of the relationship as clean. The command fails if it is used as an
attempt to start a relationship that is already a part of a consistency group.

You can run this command only to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (master and auxiliary roles) and begins the
copy process. Otherwise, this command restarts a previous copy process that was stopped
by a stop command or by an I/O error.

If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force parameter when the relationship is restarted. This situation can
arise if, for example, the relationship was stopped and then further writes were performed on
the original master of the relationship.

The use of the -force parameter here is a reminder that the data on the auxiliary becomes
inconsistent while resynchronization (background copying) takes place. Therefore, this data is
unusable for DR purposes before the background copy completes.

In the Idling state, you must specify the master volume to indicate the copy direction. In
other connected states, you can provide the -primary argument, but it must match the
existing setting.
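
For example, the following hedged sketch (the relationship name is hypothetical) starts an idling relationship with the master as the copy source, and restarts a relationship when the restart leads to a temporary loss of consistency:

startrcrelationship -primary master MMREL1
startrcrelationship -primary master -force MMREL1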

10.7.10 Stopping a Metro Mirror/Global Mirror relationship


Run the stoprcrelationship command to stop the copy process for a relationship. You can
also run this command to enable write access to a consistent auxiliary volume by specifying
the -access parameter.

This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a consistency group. You can run this command to stop a relationship that is copying from master to auxiliary.

If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you run a startrcrelationship command. Write activity is no longer copied from the
master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this
command causes a Consistency Freeze.

When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the stoprcrelationship command to enable write access to the auxiliary volume.
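
For example, the following sketch (the relationship name is hypothetical) stops a relationship, or stops it with write access enabled on the auxiliary volume for DR use:

stoprcrelationship MMREL1
stoprcrelationship -access MMREL1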

10.7.11 Starting a Metro Mirror/Global Mirror consistency group


Run the startrcconsistgrp command to start an MM/GM consistency group. You can run
this command only to a consistency group that is connected.

For a consistency group that is idling, this command assigns a copy direction (master and
auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped by a stop command or by an I/O error.
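
For example, assuming the hypothetical consistency group that was created earlier is idling, the following sketch sets the copy direction and starts the copy process:

startrcconsistgrp -primary master CG_ITSO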

10.7.12 Stopping a Metro Mirror/Global Mirror consistency group


Run the stoprcconsistgrp command to stop the copy process for an MM/GM consistency
group. You can also use this command to enable write access to the auxiliary volumes in the
group if the group is in a consistent state.

If the consistency group is in an inconsistent state, any copy operation stops and does not
resume until you run the startrcconsistgrp command. Write activity is no longer copied from
the master to the auxiliary volumes that belong to the relationships in the group. For a
consistency group in the ConsistentSynchronized state, this command causes a Consistency
Freeze.

When a consistency group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the stoprcconsistgrp command to enable write access to the auxiliary volumes within that group.
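
For example, the following sketch (the group name is hypothetical) stops a consistency group and enables write access to its auxiliary volumes:

stoprcconsistgrp -access CG_ITSO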

10.7.13 Deleting a Metro Mirror/Global Mirror relationship


Run the rmrcrelationship command to delete the relationship that is specified. Deleting a
relationship deletes only the logical relationship between the two volumes. It does not affect
the volumes themselves.

If the relationship is disconnected at the time that the command runs, the relationship is
deleted only on the system on which the command is run. When the systems reconnect, the
relationship is automatically deleted on the other system.

Alternatively, if the systems are disconnected and you still want to remove the relationship on
both systems, you can run the rmrcrelationship command independently on both systems.

A relationship cannot be deleted if it is part of a consistency group. You must first remove the
relationship from the consistency group.

If you delete an inconsistent relationship, the auxiliary volume becomes accessible even
though it is still inconsistent. This situation is the one case in which MM/GM does not inhibit
access to inconsistent data.
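
For example, the following sketch deletes a hypothetical stand-alone relationship; the master and auxiliary volumes themselves are not affected:

rmrcrelationship MMREL1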

10.7.14 Deleting a Metro Mirror/Global Mirror consistency group


Run the rmrcconsistgrp command to delete an MM/GM consistency group. This command
deletes the specified consistency group.

If the consistency group is disconnected at the time that the command runs, the consistency
group is deleted only on the system on which the command is being run. When the systems
reconnect, the consistency group is automatically deleted on the other system.

Alternatively, if the systems are disconnected and you still want to remove the consistency
group on both systems, you can run the rmrcconsistgrp command separately on both
systems.

If the consistency group is not empty, the relationships within it are removed from the
consistency group before the group is deleted. These relationships then become stand-alone
relationships. The state of these relationships is not changed by the action of removing them
from the consistency group.

10.7.15 Reversing a Metro Mirror/Global Mirror relationship


Run the switchrcrelationship command to reverse the roles of the master volume and the
auxiliary volume when a stand-alone relationship is in a consistent state. When the command
runs, the master must be specified.
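
For example, the following sketch (the relationship name is hypothetical) makes the current auxiliary volume the source of replication:

switchrcrelationship -primary aux MMREL1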

10.7.16 Reversing a Metro Mirror/Global Mirror consistency group
Run the switchrcconsistgrp command to reverse the roles of the master volume and the
auxiliary volume when a consistency group is in a consistent state. This change is applied to
all the relationships in the consistency group. When the command runs, the master must be
specified.

Important: By reversing the roles, your current source volumes become targets, and target
volumes become sources. Therefore, you lose write access to your current primary
volumes.
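
For example, the following sketch (the group name is hypothetical) reverses the replication direction for all relationships in a consistency group:

switchrcconsistgrp -primary aux CG_ITSO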

10.8 Native IP replication


IBM Spectrum Virtualize can implement RC services by using FC connections or IP
connections. This section describes the IBM Spectrum Virtualize IP replication technology
and implementation.

Demonstration: The IBM Client Demonstration Center shows how data is replicated by
using GMCV (cycling mode set to multiple). This configuration fits the new IP replication
function because it is designed for links with high latency, low bandwidth, or both.

10.8.1 Native IP replication technology


Remote mirroring over IP communication is supported on the IBM Spectrum Virtualize based
systems by using Ethernet communication links. IBM Spectrum Virtualize Software IP
replication uses the innovative Bridgeworks SANSlide technology to optimize network
bandwidth and utilization. This function enables the use of a lower-speed and lower-cost
networking infrastructure for data replication.

Bridgeworks SANSlide technology, which is integrated into the IBM Spectrum Virtualize
Software, uses artificial intelligence (AI) to help optimize network bandwidth use and adapt to
changing workload and network conditions.

This technology can improve remote mirroring network bandwidth usage up to three times.
Improved bandwidth usage can enable clients to deploy a less costly network infrastructure or
speed remote replication cycles to enhance DR effectiveness.

With an Ethernet network data flow, the data transfer can slow down over time. This condition
occurs because of the latency that is caused by waiting for the acknowledgment of each set of
packets that is sent. The next packet set cannot be sent until the previous packet is
acknowledged, as shown in Figure 10-94.

Figure 10-94 Typical Ethernet network data flow

However, by using the embedded IP replication, this behavior can be eliminated with the
enhanced parallelism of the data flow by using multiple virtual connections (VCs) that share
IP links and addresses. The AI engine can dynamically adjust the number of VCs, receive
window size, and packet size to maintain optimum performance. While the engine is waiting
for one VC’s ACK, it sends more packets across other VCs. If packets are lost from any VC,
data is automatically retransmitted, as shown in Figure 10-95.

Figure 10-95 Optimized network data flow by using Bridgeworks SANSlide technology

For more information about this technology, see IBM Storwize V7000 and SANSlide
Implementation, REDP-5023.

With native IP partnership, the following Copy Services features are supported:
򐂰 MM
Referred to as synchronous replication, MM provides a consistent copy of a source
volume on a target volume. Data is written to the target volume synchronously after it is
written to the source volume so that the copy is continuously updated.

򐂰 GM and GMCV
Referred to as asynchronous replication, GM provides a consistent copy of a source
volume on a target volume. Data is written to the target volume asynchronously so that the
copy is continuously updated. However, the copy might not contain the last few updates if
a DR operation is performed. An added extension to GM is GMCV. GMCV is the preferred
method for use with native IP replication.

Note: For IP partnerships, generally use the GMCV method of copying (asynchronous
copy of changed grains only). This method can have performance benefits. Also, GM
and MM might be more susceptible to the loss of synchronization.

10.8.2 IP partnership limitations


The following prerequisites and assumptions must be considered before you can establish an
IP partnership between two IBM Spectrum Virtualize systems:
򐂰 The IBM Spectrum Virtualize systems have the code level V7.2 or later.
򐂰 The systems must have the necessary licenses that enable RC partnerships to be
configured between two systems. No separate license is required to enable IP
partnership.
򐂰 The storage SANs are configured correctly and the correct infrastructure to support the
IBM Spectrum Virtualize systems in RC partnerships over IP links is in place.
򐂰 The two systems must be able to ping each other and perform the discovery.
򐂰 TCP ports 3260 and 3265 are used by systems for IP partnership communications.
Therefore, these ports must be open in firewalls between the systems.
򐂰 The maximum number of partnerships between the local and remote systems, including
both IP and FC partnerships, is limited to the current maximum that is supported, which is
three partnerships (four systems total).
򐂰 Only a single partnership over IP is supported.
򐂰 A system can have simultaneous partnerships over FC and IP but with separate systems.
The FC zones between two systems must be removed before an IP partnership is
configured.
򐂰 IP partnerships are supported on both 10 gigabits per second (Gbps) links and 1 Gbps
links. However, the intermix of both on a single link is not supported.
򐂰 The maximum supported round-trip time (RTT) is 80 ms for 1 Gbps links.
򐂰 The maximum supported RTT is 10 ms for 10 Gbps links.
򐂰 The inter-cluster heartbeat traffic uses 1 Mbps per link.
򐂰 Only nodes from two I/O groups can have ports that are configured for an IP partnership.
򐂰 Migrations of RC relationships directly from FC-based partnerships to IP partnerships are
not supported.
򐂰 IP partnerships between the two systems can be over IPv4 or IPv6 only, but not both.
򐂰 Virtual local area network (VLAN) tagging of the IP addresses that are configured for RC is
supported starting with V7.4.
򐂰 Management IP and internet Small Computer Systems Interface (iSCSI) IP on the same
port can be in a different network starting with Version 7.4.
򐂰 An added layer of security is provided by using Challenge Handshake Authentication
Protocol (CHAP) authentication.

򐂰 Only a single RC data session per physical link can be established. It is intended that only
one connection (for sending/receiving RC data) is made for each independent physical link
between the systems.

Note: A physical link is the physical IP link between the two sites: A (local) and B
(remote). Multiple IP addresses on local system A could be connected (by Ethernet
switches) to this physical link. Similarly, multiple IP addresses on remote system B
could be connected (by Ethernet switches) to the same physical link. At any PiT, only a
single IP address on cluster A can form an RC data session with an IP address on
cluster B.

򐂰 The maximum throughput is restricted based on the use of 1 Gbps, 10 Gbps, or 25 Gbps
Ethernet ports. It varies based on distance (for example, round-trip latency) and quality of
the communication link (for example, packet loss):
– One 1 Gbps port can transfer up to 110 MBps unidirectional, 190 MBps bidirectional.
– Two 1 Gbps ports can transfer up to 220 MBps unidirectional, 325 MBps bidirectional.
– One 10 Gbps port can transfer up to 240 MBps unidirectional, 350 MBps bidirectional.
– Two 10 Gbps ports can transfer up to 440 MBps unidirectional, 600 MBps bidirectional.

Note: IP replication is supported by 25 Gbps Mellanox and Chelsio adapters, but there
is no performance benefit or advantage for IP replication with these adapters. However,
for consolidation where these cards are used for other purposes, such as iSCSI
Extensions for RDMA (iSER) host attachment or iSCSI host attachment and back-end
virtualization, they can also be used for IP replication.

The minimum supported link bandwidth is 10 Mbps. However, this requirement scales up
with the amount of host I/O. Figure 10-96 shows the scaling of host I/O.

Figure 10-96 Scaling of host I/O

The equation that can describe the approximate minimum bandwidth that is required
between two systems with < 5 ms RTT and errorless link follows:
Minimum intersite link bandwidth in Mbps > Required Background Copy in Mbps +
Maximum Host I/O in Mbps + 1 Mbps heartbeat traffic
Increasing latency and errors results in a higher requirement for minimum bandwidth.
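For example, if the required background copy rate is 40 Mbps and the peak host write throughput to the replicated volumes is 50 Mbps, the inter-site link must sustain more than 40 + 50 + 1 = 91 Mbps (assuming an errorless link with an RTT of less than 5 ms; these workload figures are illustrative only).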

Note: The Bandwidth setting definition when the IP partnerships are created changed
in Version 7.7. Previously, the bandwidth setting defaulted to 50 MiB, and was the
maximum transfer rate from the primary site to the secondary site for initial
sync/resyncs of volumes.

The Link Bandwidth setting is now configured by using megabits (Mb) not megabytes
(MB). You set the Link Bandwidth setting to a value that the communication link can
sustain, or to what is allocated for replication. The Background Copy Rate setting is now
a percentage of the Link Bandwidth. The Background Copy Rate setting determines the
available bandwidth for the initial sync and resyncs or for GMCV.

10.8.3 IP partnership and data compression


When creating an IP partnership between two systems, you can specify whether you want to
use data compression. When enabled, IP partnership compression compresses the data that
is sent from a local system to the remote system and potentially uses less bandwidth than
with uncompressed data. It is also used to decompress data that is received by a local system
from a remote system.

Data compression is supported for IPv4 or IPv6 partnerships. To enable data compression,
both systems in an IP partnership must be running a software level that supports IP
partnership compression (Version 7.7 or later).

To fully enable compression in an IP partnership, each system must enable compression.


When compression is enabled on the local system, data that is sent to the remote system is
compressed, so it must be decompressed on the remote system and vice versa.

Although IP compression uses the same Real-time Compression (RtC) algorithm as for
volumes, there is no need for an IBM Real-time Compression Appliance license on any of the
local or remote systems.

Replicated volumes that use IP partnership compression can be compressed or uncompressed on the system because there is no link between volume compression and IP replication compression:
1. Read operation decompresses the data.
2. Decompressed data is transferred to the RC code.
3. Data is compressed before being sent over the IP link.
4. Remote system RC code decompresses the received data.
5. Write operation on a volume compresses the data.
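
For example, compression can be requested when an IP partnership is created or enabled later on an existing partnership, and it must be enabled on both systems. The following sketch uses a hypothetical system name and IP address; verify the -compressed parameter for your code level:

mkippartnership -type ipv4 -clusterip 10.10.10.20 -linkbandwidthmbits 100 -compressed yes
chpartnership -compressed yes ITSO-REMOTE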

10.8.4 VLAN support


Starting with Version 7.4, VLAN tagging is supported for both iSCSI host attachment and IP
replication. Hosts and remote copy operations can connect to the system through Ethernet
ports. Each traffic type has different bandwidth requirements, which can interfere with each
other if they share IP connections. VLAN tagging creates two separate connections on the
same IP network for different types of traffic. The system supports VLAN configuration on
both IPv4 and IPv6 connections.

When the VLAN ID is configured for IP addresses that are used for either iSCSI host
attachment or IP replication, the VLAN settings on the Ethernet network and servers must be
configured correctly to avoid connectivity issues. After the VLANs are configured, changes to
the VLAN settings disrupt iSCSI and IP replication traffic to and from the partnerships.

During the VLAN configuration for each IP address, the VLAN settings for the local and
failover ports on two nodes of an I/O group can differ. To avoid any service disruption,
switches must be configured so that the failover VLANs are configured on the local switch
ports and the failover of IP addresses from a failing node to a surviving node succeeds. If
failover VLANs are not configured on the local switch ports, there are no paths to the
IBM Spectrum Virtualize system nodes during a node failure and the replication fails.

Consider the following requirements and procedures when implementing VLAN tagging:
򐂰 VLAN tagging is supported for IP partnership traffic between two systems.
򐂰 VLAN provides network traffic separation at the layer 2 level for Ethernet transport.
򐂰 VLAN tagging by default is disabled for any IP address of a node port (N_Port). You can
use the CLI or GUI to optionally set the VLAN ID for port IPs on both systems in the IP
partnership.
򐂰 When a VLAN ID is configured for the port IP addresses that are used in RC port groups,
appropriate VLAN settings on the Ethernet network must also be configured to prevent
connectivity issues.

Setting VLAN tags for a port is disruptive. Therefore, VLAN tagging requires that you stop the
partnership first before you configure VLAN tags. Restart the partnership after the
configuration is complete.
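
For example, the following hedged sketch (addresses, VLAN ID, port, and system name are hypothetical) stops the partnership, assigns VLAN 100 to an RC-enabled port IP address, and restarts the partnership:

chpartnership -stop ITSO-REMOTE
cfgportip -node node1 -ip 10.11.12.21 -mask 255.255.255.0 -gw 10.11.12.1 -vlan 100 -remotecopy 1 1
chpartnership -start ITSO-REMOTE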

10.8.5 IP partnership and terminology


The IP partnership terminology and abbreviations that are used are listed in Table 10-13.

Table 10-13 Terminology for IP partnership


IP partnership terminology    Description

RC group or RC port group    The following numbers group a set of IP addresses that are connected to the same physical link. Therefore, only IP addresses that are part of the same RC group can form RC connections with the partner system:
򐂰 0: Ports that are not configured for RC.
򐂰 1: Ports that belong to the RC port group 1.
򐂰 2: Ports that belong to the RC port group 2.
Each IP address can be shared for iSCSI host attachment and RC functions. Therefore, appropriate settings must be applied to each IP address.

IP partnership    Two systems that are partnered to perform RC over native IP links.

FC partnership    Two systems that are partnered to perform RC over native FC links.

Failover Failure of a node within an I/O group causes the volume access
to go through the surviving node. The IP addresses fail over to
the surviving node in the I/O group. When the configuration
node of the system fails, management IPs also fail over to an
alternative node.

Failback When the failed node rejoins the system, all failed over IP
addresses are failed back from the surviving node to the
rejoined node, and volume access is restored through
this node.

linkbandwidthmbits    Aggregate bandwidth of all physical links between two sites in Mbps.

IP partnership or partnership over native IP links    These terms are used to describe the IP partnership feature.

Discovery    Process by which two IBM Spectrum Virtualize systems exchange information about their IP address configuration. For IP-based partnerships, only IP addresses that are configured for RC are discovered. For example, the first discovery takes place when the user is running the mkippartnership command. Subsequent discoveries can take place as a result of user activities (configuration changes) or as a result of hardware failures (for example, node failures, port failures, and other failures).

10.8.6 States of IP partnership


The different partnership states in IP partnership are listed in Table 10-14.

Table 10-14 States of IP partnership


State    Systems connected    Support for active Remote Copy I/O    Comments

Partially_Configured_Local    No    No    This state indicates that the initial discovery is complete.

Fully_Configured    Yes    Yes    The discovery successfully completed between two systems, and the two systems can establish RC relationships.

Fully_Configured_Stopped    Yes    Yes    The partnership is stopped on the system.

Fully_Configured_Remote_Stopped    Yes    No    The partnership is stopped on the remote system.

Not_Present    Yes    No    The two systems cannot communicate with each other. This state is also seen when data paths between the two systems are not established.

Fully_Configured_Exceeded    Yes    No    There are too many systems in the network, and the partnership from the local system to the remote system is disabled.

Fully_Configured_Excluded    No    No    The connection is excluded because of too many problems, or either system cannot support the I/O workload for the MM and GM relationships.

To establish an IP partnership between two systems, complete the following steps:

1. The administrator configures the CHAP secret on both systems. This step is not
mandatory, and users can choose to not configure the CHAP secret.
2. The administrator configures the system IP addresses on both local and remote systems
so that they can discover each other over the network.

3. If you want to use VLANs, configure your local area network (LAN) switches and Ethernet
ports to use VLAN tagging.
4. The administrator configures the systems ports on each node in both systems by using
the GUI (or the cfgportip command), and completes the following steps:
a. Configure the IP addresses for RC data.
b. Add the IP addresses in the respective RC port group.
c. Define whether the host access on these ports over iSCSI is allowed.
5. The administrator establishes the partnership with the remote system from the local
system, where the partnership state then changes to Partially_Configured_Local.
6. The administrator establishes the partnership from the remote system with the local
system. If this process is successful, the partnership state then changes to
Fully_Configured, which implies that the partnerships over the IP network were
successfully established. The partnership state momentarily remains Not_Present before
moving to the Fully_Configured state.
7. The administrator creates MM, GM, and GMCV relationships.

Partnership consideration: When the partnership is created, no master or auxiliary status is defined or implied. The partnership is equal. The concepts of master or auxiliary and primary or secondary apply to volume relationships only, not to system partnerships.
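
As a minimal sketch of steps 4 - 6 (all IP addresses and names are hypothetical; verify the parameters for your code level), the following commands configure an RC IP address in RC port group 1 on one node and then establish the partnership from the local system:

cfgportip -node node1 -ip 10.11.12.21 -mask 255.255.255.0 -gw 10.11.12.1 -remotecopy 1 1
mkippartnership -type ipv4 -clusterip 10.11.12.121 -linkbandwidthmbits 100 -backgroundcopyrate 50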

10.8.7 Remote Copy groups


This section describes RC groups (or RC port groups) and different ways to configure the
links between the two remote systems. The two IBM Spectrum Virtualize systems can be
connected to each other over one link or, at most, two links. To address the requirement to
enable the systems to know about the physical links between the two sites, the concept of RC
port groups was introduced.

The RC port group ID is a numerical tag that is associated with an IP port of an IBM Spectrum
Virtualize system to indicate which physical IP link it is connected to. Multiple nodes might be
connected to the same physical long-distance link, and must therefore share the same RC
port group ID.

In scenarios with two physical links between the local and remote clusters, two RC port group
IDs must be used to designate which IP addresses are connected to which physical link. This
configuration must be done by the system administrator by using the GUI or the cfgportip
command.

Remember: IP ports on both partners must be configured with identical RC port group IDs
for the partnership to be established correctly.

The IBM Spectrum Virtualize system IP addresses that are connected to the same physical
link are designated with identical RC port groups. The system supports three RC groups: 0, 1,
and 2.

The systems’ IP addresses are, by default, in RC port group 0. Ports in port group 0 are not
considered for creating RC data paths between two systems. For partnerships to be
established over IP links directly, IP ports must be configured in RC group 1 if a single
inter-site link exists, or in RC groups 1 and 2 if two inter-site links exist.

You can assign one IPv4 address and one IPv6 address to each Ethernet port on the system
platforms. Each of these IP addresses can be shared between iSCSI host attachment and the
IP partnership. The user must configure the required IP address (IPv4 or IPv6) on an
Ethernet port with an RC port group.

The administrator might want to use IPv6 addresses for RC operations and use IPv4
addresses on that same port for iSCSI host attach. This configuration also implies that for two
systems to establish an IP partnership, both systems must have IPv6 addresses that are
configured.

Administrators can choose to dedicate an Ethernet port for IP partnership only. In that case,
host access must be explicitly disabled for that IP address and any other IP address that is
configured on that Ethernet port.

Note: To establish an IP partnership, each IBM Spectrum Virtualize based system node
must have only a single RC port group that is configured, 1 or 2. The remaining IP
addresses must be in RC port group 0.

10.8.8 Supported configurations

Note: For explanation purposes, this section shows a node with two ports that are
available: 1 and 2.

The following configurations for IP partnership, which have been supported since the first
release of the feature, are described in this section:
򐂰 Two 2-node systems in IP partnership over a single inter-site link, as shown in
Figure 10-97 (configuration 1).

Figure 10-97 Single link with only one Remote Copy port group that is configured in each system

As shown in Figure 10-97 on page 635, there are two systems:
– System A
– System B
A single RC port group 1 is created on Node A1 on System A and on Node B2 on System
B because there is only a single inter-site link to facilitate the IP partnership traffic. An
administrator might choose to configure the RC port group on Node B1 on System B
rather than Node B2.
At any time, only the IP addresses that are configured in RC port group 1 on the nodes in
System A and System B participate in establishing data paths between the two systems
after the IP partnerships are created. In this configuration, no failover ports are configured
on the partner node in the same I/O group.
This configuration has the following characteristics:
– Only one node in each system has an RC port group that is configured, and no failover
ports are configured.
– If Node A1 in System A or Node B2 in System B encounters a failure, the IP
partnership stops and enters the Not_Present state until the failed nodes recover.
– After the nodes recover, the IP ports fail back, the IP partnership recovers, and the
partnership state goes to the Fully_Configured state.
– If the inter-site system link fails, the IP partnerships change to the Not_Present state.
– This configuration is not recommended because it is not resilient to node failures.
򐂰 Two 2-node systems in IP partnership over a single inter-site link (with failover ports
configured), as shown in Figure 10-98 (configuration 2).

Figure 10-98 One RC group on each system and nodes with failover ports configured

As shown in Figure 10-98, there are two systems:
– System A
– System B

A single RC port group 1 is configured on two Ethernet ports, one each on Node A1 and
Node A2 on System A. Similarly, a single RC port group is configured on two Ethernet
ports on Node B1 and Node B2 on System B.
Although two ports on each system are configured for RC port group 1, only one Ethernet
port in each system actively participates in the IP partnership process. This selection is
determined by a path configuration algorithm that is designed to choose data paths
between the two systems to optimize performance.
The other port on the partner node in the I/O group behaves as a standby port that is used
if there is a node failure. If Node A1 fails in System A, the IP partnership continues
servicing replication I/O from Ethernet Port 2 because a failover port is configured on
Node A2 on Ethernet Port 2.
However, it might take some time for the discovery and path configuration logic to reestablish paths post-failover. This delay can cause partnerships to change to Not_Present for that time. The details of the particular IP port that is actively participating in the IP partnership are provided in the lsportip output (reported as used).
This configuration has the following characteristics:
– Each node in the I/O group has the same RC port group that is configured. However,
only one port in that RC port group is active at any time at each system.
– If Node A1 in System A or Node B2 in System B fails, IP partnership rediscovery is triggered and I/O continues to be serviced from the failover port.
– The discovery mechanism that is triggered because of failover might introduce a delay
where the partnerships momentarily change to the Not_Present state and recover.



򐂰 Two 4-node systems in IP partnership over a single inter-site link (with failover ports
configured), as shown in Figure 10-99 (configuration 3).

Figure 10-99 Multinode systems single inter-site link with only one Remote Copy port group

As shown in Figure 10-99, there are two 4-node systems:
– System A
– System B
A single RC port group 1 is configured on nodes A1, A2, A3, and A4 on System A, Site A;
and on nodes B1, B2, B3, and B4 on System B, Site B. Although four ports are configured
for RC group 1, only one Ethernet port in each RC port group on each system actively
participates in the IP partnership process.
Port selection is determined by a path configuration algorithm. The other ports play the
role of standby ports.
If Node A1 fails in System A, the IP partnership selects one of the remaining ports that is
configured with RC port group 1 from any of the nodes from either of the two I/O groups in
System A. However, it might take some time (generally seconds) for discovery and path
configuration logic to reestablish paths post-failover. This process can cause partnerships
to change to the Not_Present state.
This result causes RC relationships to stop. The administrator might need to manually
verify the issues in the event log and start the relationships or Remote Copy consistency
groups if they do not autorecover. The details of the particular IP port actively participating in the IP partnership process are provided in the lsportip output (reported as used).

This configuration has the following characteristics:
– Each node has the RC port group that is configured in both I/O groups. However, only
one port in that RC port group remains active and participates in IP partnership on
each system.
– If Node A1 in System A or Node B2 in System B encounters a failure, IP partnership discovery is triggered and I/O continues to be serviced from the failover port.
– The discovery mechanism that is triggered because of failover might introduce a delay
where the partnerships momentarily change to the Not_Present state and then
recover.
– The bandwidth of the single link is used completely.
򐂰 Eight-node system in IP partnership with four-node system over single inter-site link, as
shown in Figure 10-100 (configuration 4).

Figure 10-100 Multinode systems single inter-site link with only one Remote Copy port group



As shown in Figure 10-100 on page 639, there is an eight-node system (System A in Site
A) and a four-node system (System B in Site B). A single RC port group 1 is configured on
nodes A1, A2, A5, and A6 on System A at Site A. Similarly, a single RC port group 1 is
configured on nodes B1, B2, B3, and B4 on System B.
Although there are four I/O groups (eight nodes) in System A, a maximum of two I/O groups can be configured for IP partnerships. If Node A1 fails in System A, the IP partnership continues by using one of the ports that is configured in the RC port group from any of the nodes from either of the two configured I/O groups in System A.
However, it might take some time for discovery and path configuration logic to reestablish
paths post-failover, and this delay might cause partnerships to change to the Not_Present
state.
This process can lead to RC relationships stopping, and the administrator must manually
start them if the relationships do not auto-recover. The details of which particular IP port is
actively participating in IP partnership process is provided in the lsportip output (reported
as used).
This configuration has the following characteristics:
– Each node has the RC port group that is configured in both the I/O groups that are
identified for participating in IP replication. However, only one port in that RC port
group remains active on each system and participates in IP replication.
– If Node A1 in System A or Node B2 in System B fails, the IP partnerships trigger discovery and continue servicing the I/O from the failover ports.
– The discovery mechanism that is triggered because of failover might introduce a delay
where the partnerships momentarily change to the Not_Present state and then recover.
– The bandwidth of the single link is used completely.

򐂰 Two 2-node systems with two inter-site links, as shown in Figure 10-101 (configuration 5).

Figure 10-101 Dual links with two Remote Copy groups on each system configured

As shown in Figure 10-101, RC port groups 1 and 2 are configured on the nodes in
System A and System B because two inter-site links are available. In this configuration,
the failover ports are not configured on partner nodes in the I/O group. Instead, the ports
are maintained in different RC port groups on both of the nodes. They remain active and
participate in the IP partnership by using both of the links.
However, if either of the nodes in the I/O group fail (that is, if Node A1 on System A fails),
the IP partnership continues only from the available IP port that is configured in RC port
group 2. Therefore, the effective bandwidth of the two links is reduced to 50% because
only the bandwidth of a single link is available until the failure is resolved.
This configuration has the following characteristics:
– Two inter-site links and two RC port groups are configured.
– Each node has only one IP port in RC port group 1 or 2.
– Both the IP ports in the two RC port groups participate simultaneously in IP
partnerships. Therefore, both links are used.
– During node failure or link failure, the IP partnership traffic continues from the other
available link and the port group. Therefore, if two links of 10 Mbps each are available
and you have 20 Mbps of effective link bandwidth, bandwidth is reduced to 10 Mbps
only during a failure.
– After the node failure or link failure is resolved and failback happens, the entire
bandwidth of both of the links is available as before.



򐂰 Two 4-node systems in IP partnership with dual inter-site links, as shown in Figure 10-102
(configuration 6).

Figure 10-102 Multinode systems with dual inter-site links between the two systems

As shown in Figure 10-102, there are two 4-node systems:
– System A
– System B
This configuration is an extension of Configuration 5 to a multinode multi-I/O group
environment (Figure 10-101 on page 641). This configuration has two I/O groups, and
each node in the I/O group has a single port that is configured in RC port groups 1 or 2.
Although two ports are configured in RC port groups 1 and 2 on each system, only one IP
port in each RC port group on each system actively participates in the IP partnership. The
other ports that are configured in the same RC port group act as standby ports in the
event of failure. Which port in a configured RC port group participates in IP partnership at
any moment is determined by a path configuration algorithm.

In this configuration, if Node A1 fails in System A, IP partnership traffic continues from
Node A2 (that is, RC port group 2) and at the same time the failover also causes discovery
in RC port group 1. Therefore, the IP partnership traffic continues from Node A3 on which
RC port group 1 is configured. The details of the particular IP port that is actively participating in the IP partnership process are provided in the lsportip output (reported as used).
This configuration has the following characteristics:
– Each node has the RC port group that is configured in the I/O groups 1 or 2. However,
only one port per system in both RC port groups remains active and participates in the
IP partnership.
– Only a single port per system from each configured RC port group participates
simultaneously in the IP partnership. Therefore, both links are used.
– During node failure or port failure of a node that is actively participating in the IP
partnership, the IP partnership continues from the alternative port because another
port is in the system in the same RC port group, but in a different I/O group.
– The pathing algorithm can start discovery of available ports in the affected RC port
group in the second I/O group and reestablish pathing, which restores the total
bandwidth, so both links are available to support the IP partnership.



򐂰 Eight-node system in IP partnership with a four-node system over dual inter-site links, as
shown in Figure 10-103 (configuration 7).

Figure 10-103 Multinode systems (two I/O groups on each system) with dual inter-site links
between the two systems

As shown in Figure 10-103, there is an eight-node System A in Site A and a four-node System B in Site B. Because a maximum of two I/O groups per system is supported in an IP partnership, nodes from only two of the four I/O groups (eight nodes) are configured with RC port groups in System A. The remaining I/O groups can be configured for RC partnerships over FC.
In this configuration, two links and two I/O groups are configured with RC port groups 1
and 2, but the path selection logic is managed by an internal algorithm. Therefore, this
configuration depends on the pathing algorithm to decide which of the nodes actively
participates in the IP partnership. Even if Node A5 and Node A6 are configured with RC
port groups properly, active IP partnership traffic on both links might be driven from Node
A1 and Node A2 only.

If Node A1 fails in System A, the IP partnership traffic continues from Node A2 (that is, RC
port group 2). The failover also causes IP partnership traffic to continue from Node A5, on
which RC port group 1 is configured. The details of the IP port actively participating in the IP partnership process are provided in the lsportip output (reported as used).
This configuration has the following characteristics:
– There are two I/O groups with nodes in those I/O groups that are configured in two RC
port groups because there are two inter-site links for participating in IP partnership.
However, only one port per system in a particular RC port group remains active and
participates in IP partnership.
– One port per system from each RC port group participates in IP partnership
simultaneously. Therefore, both links are used.
– If a node or a port on a node that is actively participating in the IP partnership fails, the RC data path is reestablished from another port because an alternative node in the system has an available port in the same RC port group.
– The path selection algorithm starts discovery of available ports in the affected RC port
group in the alternative I/O groups and the paths are reestablished, which restores the
total bandwidth across both links.
– The remaining or all the I/O groups can be in RC partnerships with other systems.
򐂰 An example of an unsupported configuration for a single inter-site link is shown in
Figure 10-104 (configuration 8).

Figure 10-104 Two-node systems with a single inter-site link and Remote Copy port groups
configured

As shown in Figure 10-104, this configuration is similar to Configuration 2 (Figure 10-98 on page 636), but differs because each node now has the same RC port group configured on more than one IP port.
On any node, only one port at any time can participate in the IP partnership. Configuring
multiple ports in the same RC group on the same node is not supported.



򐂰 An example of an unsupported configuration for a dual inter-site link is shown in
Figure 10-105 (configuration 9).

Figure 10-105 Dual links with two Remote Copy port groups with failover port groups configured

As shown in Figure 10-105, this configuration is similar to Configuration 5 (Figure 10-101 on page 641), but differs because each node now also has two ports that are configured
with RC port groups. In this configuration, the path selection algorithm can select a path
that might cause partnerships to change to the Not_Present state and then recover, which
results in a configuration restriction. Using this configuration is not recommended until the
configuration restriction is lifted in future releases.
򐂰 An example deployment for Configuration 2 with a dedicated inter-site link is shown in
Figure 10-106 (configuration 10).

Figure 10-106 Deployment example

In this configuration, one port on each node in System A and System B is configured in
RC group 1 to establish the IP partnership and support RC relationships. A dedicated
inter-site link is used for the IP partnership traffic, and iSCSI host attachment is disabled
on those ports.
The following configuration process is used:
a. Configure the system IP addresses so that they can be reached over the inter-site link.
b. Qualify whether the partnerships must be created over IPv4 or IPv6, and then assign
IP addresses and open firewall ports 3260 and 3265.
c. Configure the IP ports for RC on both systems by using the following settings:
• RC group: 1
• Host: No
• Assign IP address
d. Check that the maximum transmission unit (MTU) levels across the network meet the
requirements as set (default MTU is 1500 on IBM Spectrum Virtualize).
e. Establish IP partnerships from both of the systems (a CLI sketch is provided at the end of this section).
f. After the partnerships are in the Fully_Configured state, you can create the RC
relationships.
򐂰 An example deployment for the configuration that is shown in Figure 10-100 on page 639, with ports that are shared with host access, is shown in Figure 10-107 (configuration 11).

Figure 10-107 Deployment example

In this configuration, IP ports are shared by both iSCSI hosts and for the IP partnership.
The following configuration process is used:
a. Configure the system IP addresses so that they can be reached over the inter-site link.
b. Qualify whether the partnerships must be created over IPv4 or IPv6, and then assign
IP addresses and open firewall ports 3260 and 3265.



c. Configure IP ports for RC on System A1 by using the following settings:
• Node 1:
- Port 1, RC port group 1
- Host: Yes
- Assign IP address
• Node 2:
- Port 4, RC Port Group 2
- Host: Yes
- Assign IP address
d. Configure IP ports for RC on System B1 by using the following settings:
• Node 1:
- Port 1, RC port group 1
- Host: Yes
- Assign IP address
• Node 2:
- Port 4, RC port group 2
- Host: Yes
- Assign IP address
e. Check the MTU levels across the network as set (default MTU is 1500 on IBM
Spectrum Virtualize).
f. Establish IP partnerships from both systems.
g. After the partnerships are in the Fully_Configured state, you can create the RC
relationships.
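The following hedged CLI sketch summarizes the preceding steps for one side of these deployments (the addresses, bandwidth, and rate are illustrative assumptions; run the equivalent commands on both systems):

cfgportip -node 1 -ip 10.20.30.11 -mask 255.255.255.0 -gw 10.20.30.1 -remotecopy 1 1
mkippartnership -type ipv4 -clusterip 10.20.40.21 -linkbandwidthmbits 100 -backgroundcopyrate 50
lspartnership

The partnership is reported as partially configured until the matching mkippartnership command is run on the partner system.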

10.9 Managing Remote Copy by using the GUI


It is often easier to control MM/GM with the GUI if you have few mappings. When many
mappings are used, run your commands by using the CLI. This section describes the tasks
that you can perform at an RC level by using a GUI.

Note: The Copy Services → Consistency Groups menu relates to FlashCopy consistency groups only, not RC ones.

The following windows are used to visualize and manage your remote copies:
򐂰 The Remote Copy window
To open the Remote Copy window, select Copy Services → Remote Copy in the main
menu, as shown in Figure 10-108 on page 649.

Figure 10-108 Remote Copy menu

The Remote Copy window opens, as shown in Figure 10-109.

Figure 10-109 Remote Copy window



򐂰 Partnerships window
To open the Partnership window, select Copy Services → Partnership in the main menu,
as shown in Figure 10-110.

Figure 10-110 Partnership menu

The Partnership window opens, as shown in Figure 10-111.

Figure 10-111 Partnership window

10.9.1 Creating a Fibre Channel partnership

Intra-cluster MM: If you are creating an intra-cluster MM, do not perform the steps in this
section to create the MM partnership. Instead, see 10.9.2, “Creating Remote Copy
relationships” on page 652.

To create an FC partnership between IBM Spectrum Virtualize systems by using the GUI,
open the Partnerships window and click Create Partnership to create a partnership, as
shown in Figure 10-112 on page 651.

Figure 10-112 Creating a partnership

In the Create Partnership window, enter the following information, as shown in Figure 10-113:
1. Select the partnership type (Fibre Channel or IP). If you choose IP, you must provide the
IP address of the partner system and the partner system’s CHAP key.
2. If your partnership is based on Fibre Channel Protocol (FCP), select an available partner
system from the menu. If no candidate is available, the This system does not have any
candidates error message is displayed.
3. Enter a link bandwidth in Mbps that is used by the background copy process between the
systems in the partnership.
4. Enter the background copy rate.
5. Click OK to confirm the partnership relationship.

Figure 10-113 Creating a partnership details

To fully configure the partnership between both systems, perform the same steps on the other
system in the partnership. If not configured on the partner system, the partnership is
displayed as Partially Configured.



When both sides of the system partnership are defined, the partnership is displayed as Fully
Configured, as shown in Figure 10-114.

Figure 10-114 Fully configured Fibre Channel partnership
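The same FC partnership can be created from the CLI, which is useful for scripted deployments. A minimal sketch, assuming a 1024 Mbps link, a 50% background copy rate, and a partner system named ITSO-SystemB (all illustrative values):

mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 ITSO-SystemB

Run the matching command on the partner system to move the partnership from Partially Configured to Fully Configured.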

10.9.2 Creating Remote Copy relationships


This section shows how to create RC relationships for volumes with their respective remote
targets. Before creating a relationship between a volume on the local system and a volume on
a remote system, both volumes must exist and have the same virtual size.

To create an RC relationship, complete the following steps:


1. Select Copy Services → Remote Copy.
2. Right-click the consistency group that you want to create the relationship for and select
Create Relationship, as shown in Figure 10-115. If you want to create a stand-alone
relationship (not in a consistency group), right-click Not in a Group.

Figure 10-115 Creating Remote Copy relationships

3. In the Create Relationship window, select one of the following types of relationships that
you want to create, as shown in Figure 10-116, and click Next:
– Metro Mirror
– Global Mirror (with or without Consistency Protection)
– Global Mirror with Change Volumes

Figure 10-116 Creating a Remote Copy relationship



4. In the next window, select the master and auxiliary volumes, as shown in Figure 10-117,
and click Add.

Figure 10-117 Selecting the master and auxiliary volumes

Important: The master and auxiliary volumes must be of equal size. Therefore, only
the targets with the appropriate size are shown in the list for a specific source volume.

5. In the next window, you can choose to add change volumes, as shown in Figure 10-118.
Click Finish.

Figure 10-118 Add Change Volumes



6. In the next window, check the relationship that was created. If you want to add more
relationships into the same consistency group, add the new relationships, as shown in
Figure 10-119. Click Next.

Figure 10-119 Checking and adding the relationship

7. In the next window, select whether the volumes are synchronized so that the relationship
is created, as shown in Figure 10-120. Click Finish.

Figure 10-120 Selecting whether the volumes are synchronized

Note: If the volumes are not synchronized, the initial copy copies the entire source volume
to the remote target volume. If you suspect volumes are different or if you have doubts,
synchronize them to ensure consistency on both sides of the relationship.
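The same relationship types can be created from the CLI with the mkrcrelationship command. The following lines are a sketch with placeholder volume and system names: without extra flags, an MM relationship is created; -global creates a GM relationship; and -sync declares that the volumes are already synchronized:

mkrcrelationship -master MASTER_VOL -aux AUX_VOL -cluster ITSO-SystemB
mkrcrelationship -master MASTER_VOL -aux AUX_VOL -cluster ITSO-SystemB -global -sync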

10.9.3 Creating a consistency group


To create a consistency group, complete the following steps:
1. Select Copy Services → Remote Copy and click Create Consistency Group, as shown
in Figure 10-121.

Figure 10-121 Creating a Remote Copy consistency group



2. Enter a name for the consistency group and click Next, as shown in Figure 10-122.

Figure 10-122 Entering a name for the new consistency group

3. In the next window, select the location of the auxiliary volumes in the consistency group,
as shown in Figure 10-123, and click Next:
– On this system, which means that the volumes are local.
– On another system, which means that you select the remote system from the menu.

Figure 10-123 Selecting the system to create the consistency group



4. Select whether you want to add relationships to this group, as shown in Figure 10-124.
The following options are available:
– If you select No, click Finish to create an empty consistency group that can be used
later.
– If you select Yes, click Next to continue the wizard and continue with the next steps.

Figure 10-124 Selecting whether relationships should be added to the new consistency group

5. Select one of the following types of relationships that you want to create or add, as shown
in Figure 10-125, and click Next:
– Metro Mirror.
– Global Mirror (with or without Consistency Protection).
– Global Mirror with Change Volumes.

Figure 10-125 Selecting the type of Remote Copy relationships to create/add



6. As shown in Figure 10-126, you can optionally select existing relationships to add to the
group. Click Next.

Figure 10-126 Adding Remote Copy relationships to the new consistency group

Note: Only relationships of the type that was previously selected are listed.

7. In the next window, you can create relationships between master volumes and auxiliary
volumes to be added to the consistency group that is being created, as shown in
Figure 10-127. Click Add when both volumes are selected. You can add multiple
relationships in this step by repeating the selection.
When all the relationships you need are created, click Next.

Figure 10-127 Creating relationships for the new consistency group

Important: The master and auxiliary volumes must be of equal size. Therefore, only
the targets with the appropriate size are shown in the list for a specific source volume.



8. Specify whether the volumes in the consistency group are synchronized, as shown in
Figure 10-128, and click Next.

Figure 10-128 Selecting whether volumes in the new consistency group are already synchronized

Note: If the volumes are not synchronized, the initial copy copies the entire source
volume to the remote target volume. If you suspect volumes are different or if you have
doubts, then synchronize them to ensure consistency on both sides of the relationship.

9. In the last window, select whether you want to start the copy of the consistency group, as
shown in Figure 10-129. Click Finish.

Figure 10-129 Selecting whether the copy should start
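A hedged CLI equivalent of this wizard (names are placeholders): create the consistency group against the remote system, and then create relationships directly into it by using the -consistgrp parameter:

mkrcconsistgrp -name ITSO_CG01 -cluster ITSO-SystemB
mkrcrelationship -master MASTER_VOL -aux AUX_VOL -cluster ITSO-SystemB -consistgrp ITSO_CG01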



10.9.4 Renaming Remote Copy relationships
To rename one or multiple RC relationships, complete the following steps:
1. Select Copy Services → Remote Copy.
2. Right-click the relationships to be renamed and select Rename, as shown in
Figure 10-130.

Figure 10-130 Renaming Remote Copy relationships

3. Enter the new name that you want to assign to the relationships and click Rename, as
shown in Figure 10-131.

Figure 10-131 Renaming Remote Copy relationships

Remote Copy relationship name: You can use the letters A - Z and a - z, the numbers 0 -
9, and the underscore (_) character. The RC name can be 1 - 15 characters. No blanks are
allowed.



10.9.5 Renaming a Remote Copy consistency group
To rename a Remote Copy consistency group, complete the following steps:
1. Select Copy Services → Remote Copy.
2. Right-click the consistency group to be renamed and select Rename, as shown in
Figure 10-132.

Figure 10-132 Renaming a Remote Copy consistency group

3. Enter the new name that you want to assign to the consistency group and click Rename,
as shown in Figure 10-133.

Figure 10-133 Entering a new name for a consistency group

Remote Copy consistency group name: You can use the letters A - Z and a - z, the
numbers 0 - 9, and the underscore (_) character. The RC name can be 1 - 15 characters.
No blanks are allowed.
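Renaming can also be done from the CLI, subject to the same naming rules. A sketch with placeholder names:

chrcrelationship -name NEW_REL_NAME OLD_REL_NAME
chrcconsistgrp -name NEW_CG_NAME OLD_CG_NAME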



10.9.6 Moving stand-alone Remote Copy relationships to a consistency group
To add one or multiple stand-alone relationships to a Remote Copy consistency group,
complete the following steps:
1. Select Copy Services → Remote Copy.
2. Right-click the relationships to be moved and select Add to Consistency Group, as
shown in Figure 10-134.

Figure 10-134 Adding relationships to a consistency group

3. Select the consistency group for this RC relationship by using the menu, as shown in
Figure 10-135. Click Add to Consistency Group to confirm your changes.

Figure 10-135 Selecting the consistency group to which to add the relationships

10.9.7 Removing Remote Copy relationships from a consistency group


To remove one or multiple relationships from a Remote Copy consistency group, complete the
following steps:
1. Select Copy Services → Remote Copy.
2. Right-click the relationships to be removed and select Remove from Consistency
Group, as shown in Figure 10-136.

Figure 10-136 Removing relationships from a consistency group



3. Confirm your selection and click Remove, as shown in Figure 10-137.

Figure 10-137 Confirming the removal of relationships from a consistency group
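From the CLI, a stand-alone relationship is moved into a consistency group, or removed from it, with the chrcrelationship command (placeholder names; the relationship state must be compatible with the state of the group):

chrcrelationship -consistgrp ITSO_CG01 REL_NAME
chrcrelationship -noconsistgrp REL_NAME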

10.9.8 Starting Remote Copy relationships


When an RC relationship is created, the RC process can be started. Only relationships that are not members of a consistency group, or the only relationship in a consistency group, can be started. In any other case, consider starting the consistency group instead.

To start one or multiple relationships, complete the following steps:


1. Select Copy Services → Remote Copy.
2. Right-click the relationships to be started and select Start, as shown in Figure 10-138 on
page 673.

Figure 10-138 Starting Remote Copy relationships

10.9.9 Starting a Remote Copy consistency group


When a Remote Copy consistency group is created, the RC process can be started for all the relationships that are part of the consistency group.



To start a consistency group, select Copy Services → Remote Copy, right-click the
consistency group to be started, and select Start, as shown in Figure 10-139.

Figure 10-139 Starting a Remote Copy consistency group
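The CLI equivalents are startrcrelationship and startrcconsistgrp. A sketch with placeholder names; the optional -primary parameter selects which copy acts as the source when the direction is ambiguous:

startrcrelationship -primary master REL_NAME
startrcconsistgrp -primary master ITSO_CG01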

10.9.10 Switching a relationship’s copy direction


When an RC relationship is in the Consistent synchronized state, the copy direction for the
relationship can be changed. Only relationships that are not a member of a consistency
group, or the only relationship in a consistency group, can be switched. In any other case,
consider switching the consistency group instead.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the volume that changes from primary to secondary because all of the I/O is inhibited to
that volume when it becomes the secondary. Therefore, careful planning is required before
you switch the copy direction for a relationship.

To switch the direction of an RC relationship, complete the following steps:
1. Select Copy Services → Remote Copy.
2. Right-click the relationship to be switched and select Switch, as shown in Figure 10-140.

Figure 10-140 Switching the Remote Copy relationship direction

3. Because the master-auxiliary relationship direction is reversed, write access is disabled on the new auxiliary volume (former master volume), but it is enabled on the new master volume (former auxiliary volume). A warning message is displayed, as shown in Figure 10-141. Click Yes.

Figure 10-141 Switching the master-auxiliary direction of relationships changes write access



4. When an RC relationship is switched, an icon is displayed in the Remote Copy window, as
shown in Figure 10-142.

Figure 10-142 Switched Remote Copy relationship

10.9.11 Switching a consistency group direction


When a Remote Copy consistency group is in the consistent synchronized state, the copy
direction for the consistency group can be changed.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the volume that changes from primary to secondary because all the I/O is inhibited to
that volume when it becomes the secondary. Therefore, careful planning is required before
you switch the copy direction for a relationship.

To switch the direction of a Remote Copy consistency group, complete the following steps:
1. Select Copy Services → Remote Copy.
2. Right-click the consistency group to be switched and select Switch, as shown in
Figure 10-143 on page 677.

Figure 10-143 Switching a consistency group’s direction

3. Because the master-auxiliary relationship direction is reversed, write access is disabled on the new auxiliary volume (former master volume), while it is enabled on the new master volume (former auxiliary volume). A warning message is displayed, as shown in Figure 10-144. Click Yes.

Figure 10-144 Switching direction of consistency groups changes the write access
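A hedged CLI sketch of the same actions (placeholder names), making the auxiliary copy the primary for a single relationship or for a whole consistency group:

switchrcrelationship -primary aux REL_NAME
switchrcconsistgrp -primary aux ITSO_CG01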

10.9.12 Stopping Remote Copy relationships


When an RC relationship is created and started, the RC process can be stopped. Only
relationships that are not members of a consistency group, or the only relationship in a
consistency group, can be stopped. In any other case, consider stopping the consistency
group instead.



To stop one or multiple relationships, complete the following steps:
1. Select Copy Services → Remote Copy.
2. Right-click the relationships to be stopped and select Stop, as shown in Figure 10-145.

Figure 10-145 Stopping a Remote Copy relationship

3. When an RC relationship is stopped, access to the auxiliary volume can be changed so it
can be read and written by a host. A confirmation message is displayed, as shown in
Figure 10-146.

Figure 10-146 Granting access to read and write to the auxiliary volume



10.9.13 Stopping a consistency group
When a Remote Copy consistency group is created and started, the RC process can be
stopped.

To stop a consistency group, complete the following steps:


1. Select Copy Services → Remote Copy.
2. Right-click the consistency group to be stopped and select Stop, as shown in
Figure 10-147.

Figure 10-147 Stopping a consistency group

3. When a Remote Copy consistency group is stopped, access to the auxiliary volumes can
be changed so it can be read and written by a host. A confirmation message is displayed,
as shown in Figure 10-148.

Figure 10-148 Granting access to read and write to the auxiliary volumes
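A minimal CLI sketch (placeholder names); the -access flag corresponds to the GUI option that enables read and write access to the auxiliary volumes:

stoprcrelationship -access REL_NAME
stoprcconsistgrp -access ITSO_CG01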



10.9.14 Deleting Remote Copy relationships
To delete RC relationships, complete the following steps:
1. Select Copy Services → Remote Copy.
2. Right-click the relationships that you want to delete and select Delete, as shown in
Figure 10-149.

Figure 10-149 Deleting Remote Copy relationships

3. A confirmation message is displayed that prompts the user to enter the number of
relationships to be deleted, as shown in Figure 10-150.

Figure 10-150 Confirmation of relationship deletion



10.9.15 Deleting a consistency group
To delete a Remote Copy consistency group, complete the following steps:
1. Select Copy Services → Remote Copy.
2. Right-click the consistency group that you want to delete and select Delete, as shown in
Figure 10-151.

Figure 10-151 Deleting a consistency group

3. A confirmation message is displayed, as shown in Figure 10-152 on page 685. Click Yes.

Figure 10-152 Confirmation of a consistency group deletion

Important: Deleting a consistency group does not delete its RC mappings.
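The CLI equivalents are shown below with placeholder names; as with the GUI, deleting a consistency group with rmrcconsistgrp leaves its member relationships configured:

rmrcrelationship REL_NAME
rmrcconsistgrp ITSO_CG01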

10.10 Remote Copy memory allocation


Copy Services features require that small amounts of volume cache be converted from cache
memory into bitmap memory to enable the functions to operate at an I/O group level. If you do
not have enough bitmap space that is allocated when you try to use one of the functions, the
configuration cannot be completed.

The total memory that can be dedicated to these functions is not defined by the physical
memory in the system. The memory is constrained by the software functions that use the
memory.

For every RC relationship that is created on an IBM Spectrum Virtualize system, a bitmap table is created to track the copied grains. By default, the system allocates 20 MiB of bitmap memory per I/O group.

Every 1 MiB of bitmap memory provides 2 TiB of remote copied volume capacity for the specified I/O group at the 256 KiB grain size. The default 20 MiB allocation therefore supports 40 TiB of total MM, GM, or active-active volume capacity.

Table 10-15 shows how to calculate the memory requirements and confirm that your system
can accommodate the total installation size.

Table 10-15 Memory allocation for Remote Copy services

Minimum allocated bitmap space:                      0
Default allocated bitmap space:                      20 MiB
Maximum allocated bitmap space:                      512 MiB
Minimum function when using the default values (a):  40 TiB of remote mirroring volume capacity

a. RC includes MM, GM, and active-active relationships.



When you configure Global Mirror with Change Volumes, two internal FlashCopy mappings are created for each change volume.

For MM, GM, and HyperSwap active-active relationships, two bitmaps exist. For MM/GM
relationships, one is used for the master clustered system and one is used for the auxiliary
system because the direction of the relationship can be reversed. For active-active
relationships, which are configured automatically when HyperSwap volumes are created, one
bitmap is used for the volume copy on each site because the direction of these relationships
can be reversed.

MM/GM relationships do not automatically increase the available bitmap space. You might
need to run the chiogrp command to manually increase the space in one or both of the
master and auxiliary systems.

You can modify the resource allocation for each I/O group of your system. For more
information, see IBM Knowledge Center and expand Command-line interface → Clustered
system commands → chiogrp.
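As a sketch, the following commands display the current allocation for I/O group 0 and then raise its remote copy bitmap space to 40 MiB (an assumed target value):

lsiogrp io_grp0
chiogrp -feature remote -size 40 io_grp0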

10.11 Troubleshooting Remote Copy


RC (MM and GM) has two primary error codes that are displayed:
򐂰 A 1920 error can be considered as a voluntary stop of a relationship by the system when it
evaluates that the replication will cause errors on the hosts. A 1920 error is a congestion
error. This error means that the source, the link between the source and target, or the
target cannot keep up with the requested copy rate. The system then triggers a 1920 error
to prevent replication from having unwanted effects on hosts.
򐂰 A 1720 error is a heartbeat or system partnership communication error. This error often is
more serious because failing communication between your system partners involves
extended diagnostic time.

10.11.1 1920 error


A 1920 error is deliberately generated by the system and should be considered a control
mechanism. It occurs after 985003 (Unable to find path to disk in the remote cluster
(system) within the time-out period) or 985004 (Maximum replication delay has been
exceeded) events.

It can have several triggers, including the following probable causes:


򐂰 Primary system or SAN fabric problem (10%)
򐂰 Primary system or SAN fabric configuration (10%)
򐂰 Secondary system or SAN fabric problem (15%)
򐂰 Secondary system or SAN fabric configuration (25%)
򐂰 Intercluster link problem (15%)
򐂰 Intercluster link configuration (25%)

In practice, the most often overlooked cause is latency. GM has an RTT tolerance limit of 80
or 250 ms, depending on the firmware version and the hardware model. A message that is
sent from the source IBM Spectrum Virtualize system to the target system and the
accompanying acknowledgment must have a total time of 80 or 250 ms round trip. That is, it
must have up to 40 or 125 ms latency each way.

The primary component of your RTT is the physical distance between sites. For every
1000 kilometers (621.4 miles), you observe a 5 ms delay each way. This delay does not
include the time that is added by equipment in the path. Every device adds a varying amount
of time depending on the device, but a good rule is 25 microseconds (µs) for pure hardware
devices.

For software-based functions (such as compression that is implemented in applications), the added delay tends to be much higher (usually in the millisecond-plus range). Here is an example of a physical delay:
򐂰 Company A has a production site that is 1900 kilometers (1180.6 miles) away from its
recovery site. The network service provider uses a total of five devices to connect the two
sites. In addition to those devices, Company A employs a SAN FC router at each site to
provide FCIP to encapsulate the FC traffic between sites.
򐂰 Now, there are seven devices and 1900 kilometers (1180.6 miles) of distance delay. All the
devices are adding 200 µs of delay each way. The distance adds 9.5 ms each way, for a
total of 19 ms. Combined with the device latency, the delay is 19.4 ms of physical latency
minimum, which is under the 80 ms limit of GM until you realize that this number is the
best case number.

The link quality and bandwidth play a large role. Your network provider likely ensures a
latency maximum on your network link. Therefore, be sure to stay as far beneath the GM RTT
limit as possible. You can easily double or triple the expected physical latency with a lower
quality or lower bandwidth network link. Then, you are within the range of exceeding the limit
if high I/O occurs that exceeds the existing bandwidth capacity.

When you get a 1920 event, always check the latency first. The FCIP routing layer can
introduce latency if it is not properly configured. If your network provider reports a much lower
latency, you might have a problem at your FCIP routing layer. Most FCIP routing devices have
built-in tools to enable you to check the RTT. When you are checking latency, remember that
TCP/IP routing devices (including FCIP routers) report RTT by using standard 64-byte ping
packets.

The effective transit time must be measured only by using packets that are large enough to hold an FC frame, or 2148 bytes (2112 bytes of payload and 36 bytes of header). Allow a safety margin because various switch vendors have optional features that might increase this frame size. After you verify your latency by using the proper packet size, proceed with normal hardware troubleshooting.

Before proceeding, look at the second largest component of your RTT, which is serialization
delay. Serialization delay is the amount of time that is required to move a packet of data of a
specific size across a network link of a certain bandwidth. The required time to move a
specific amount of data decreases as the data transmission rate increases.

The amount of time in microseconds that is required to transmit a packet across network links
of varying bandwidth capacity is compared. The following packet sizes are used:
򐂰 64 bytes: The size of the common ping packet
򐂰 1500 bytes: The size of the standard TCP/IP packet
򐂰 2148 bytes: The size of an FC frame
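As a worked illustration of why ping-based measurements understate the transit time, serialization delay is the packet size divided by the link bandwidth. Assuming a 100 Mbps link:

64-byte ping packet: (64 x 8) bits / 100,000,000 bits per second = approximately 5 µs
2148-byte FC frame: (2148 x 8) bits / 100,000,000 bits per second = approximately 172 µs

An FC frame therefore takes roughly 34 times longer to serialize than the ping packet that most measurement tools report on.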

Finally, your path MTU affects the delay that is incurred to get a packet from one location to
another location. An MTU might cause fragmentation or be too large and cause too many
retransmits when a packet is lost.

For more information, see IBM Knowledge Center and expand Messages and codes →
Error codes → Error code reference → 1920.



Note: Unlike 1720 errors, 1920 errors are deliberately generated by the system because it evaluated that a relationship might impact the host’s response time. The system has no indication of whether or when the relationship can be restarted. Therefore, the relationship cannot be restarted automatically; it must be restarted manually.

10.11.2 1720 error


The 1720 error (event ID 050020) is the other problem RC might encounter. The amount of
bandwidth that is needed for system-to-system communications varies based on the number
of nodes. It must not be zero. When a partner on either side stops communication, a 1720
error is displayed in your error log. According to the product documentation, there are no
likely field-replaceable unit (FRU) breakages or other causes.

The source of this error is most often a fabric problem or a problem in the network path
between your partners. When you receive this error, check your fabric configuration for zoning
of more than one host bus adapter (HBA) port for each node per I/O group if your fabric has
more than 64 HBA ports zoned. The suggested zoning configuration for fabrics is one port for
each node per I/O group per fabric that is associated with the host.

For those fabrics with 64 or more host ports, this suggestion becomes a rule. Therefore, you
see four paths to each volume discovered on the host because each host must have at least
two FC ports from separate HBA cards, each in a separate fabric. On each fabric, each host
FC port is zoned to two IBM FlashSystem 9100 N_Ports where each N_Port comes from a
different IBM FlashSystem 9100 node. This configuration provides four paths per volume.
More than four paths per volume are supported but not recommended.

Improper zoning can lead to SAN congestion, which can inhibit remote link communication
intermittently. Checking the zero buffer credit timer with IBM Spectrum Control and comparing
it against your sample interval reveals potential SAN congestion. If a zero buffer credit timer is
more than 2% of the total time of the sample interval, it might cause problems.

Next, always ask your network provider to check the status of the link. If the link is acceptable,
watch for repeats of this error. It is possible in a normal and functional network setup to have
occasional 1720 errors, but multiple occurrences might indicate a larger problem.

If you receive multiple 1720 errors, recheck your network connection and then check the system partnership information to verify its status and settings. Then, perform diagnostics for every piece of equipment in the path between your two IBM FlashSystem 9100 systems. It
often helps to have a diagram that shows the path of your replication from both logical and
physical configuration viewpoints.

Note: With Consistency Protection enabled on the MM/GM relationships, the system tries
to resume the replication when possible. Therefore, it is not necessary to manually restart
the failed relationship after a 1720 error is triggered.

For more information, see IBM Knowledge Center and expand Messages and codes →
Error codes → Error code reference → 1720.

If your investigations fail to resolve your RC problems, contact your IBM Support
representative for a complete analysis.


Chapter 11. Ownership groups


The Ownership Groups feature, or object-based access control (OBAC), provides a method
of implementing a multi-tenant solution on the IBM FlashSystem 9100, IBM FlashSystem
9200, IBM FlashSystem 7200, and IBM FlashSystem 5100 systems. The Ownership Group
principles of operations and implementation steps are provided in this chapter.

This chapter includes the following sections:


򐂰 11.1, “Ownership groups principles of operations” on page 690
򐂰 11.2, “Implementing ownership groups on a new system” on page 692
򐂰 11.3, “Migrating existing objects to ownership groups” on page 697



11.1 Ownership groups principles of operations
Ownership groups enable the allocation of storage resources to several independent tenants
with the assurance that one tenant cannot access resources that are associated with another
tenant.

Ownership groups restrict access for users in the ownership group to only those objects that
are defined within that ownership group. An owned object can belong to one ownership group.
Users in an ownership group are restricted to viewing and managing objects within their
ownership group. Users that are not in an ownership group can continue to view or manage
all the objects on the system based on their defined user role, including objects within
ownership groups.

Only users with Security Administrator roles (for example, superuser) can configure and
manage ownership groups.

The system supports several resources that you assign to ownership groups:
򐂰 Child pools
򐂰 Volumes
򐂰 Volume groups
򐂰 Hosts
򐂰 Host clusters
򐂰 Host mappings
򐂰 IBM FlashCopy mappings
򐂰 FlashCopy consistency groups

An owned object can belong to only one ownership group. An owner is a user with an
ownership group that can view and manipulate objects within that group.

Before you create ownership groups and assign resources and users, review the following
guidelines:
򐂰 Users can be in only one ownership group at a time (applies to both local and remotely
authenticated users).
򐂰 Objects can be within at most one ownership group.
򐂰 Global resources, such as drives, enclosures, and arrays, cannot be assigned to
ownership groups.
򐂰 Global users that do not belong to an ownership group can view and manage (depending
on their user role) all resources on the system, including the ones that belong to an
ownership group, and users within an ownership group.
򐂰 Users within an ownership group cannot have the Security Administrator role. All Security
Administrator role users are global users.
򐂰 Users within an ownership group can view or change resources within the ownership
group in which they belong.

򐂰 Users within an ownership group cannot change any objects outside of their ownership
group. This restriction includes global resources that are related to resources within the
ownership group. For example, a user can change a volume in the ownership group, but
not the drive that provides the storage for that volume.
򐂰 Users within an ownership group cannot view or change resources if those resources are
assigned to another ownership group or are not assigned to any ownership group.
However, users within ownership groups can view and display global resources. For
example, users can display information on drives on the system because drives are a
global resource that cannot be assigned to any ownership group.

When a user group is assigned to an ownership group, the users in that user group retain
their role but are restricted to only those resources that belong to the same ownership group.
The role that is associated with a user group can define the permitted operations on the
system, and the ownership group can further limit access to individual resources. For
example, you can configure a user group with the Copy Operator role, which limits user
access to FlashCopy operations. Access to individual resources, such as a specific
FlashCopy consistency group, can be further restricted by assigning it to an ownership group.

A child pool is a key requirement for the Ownership Groups feature. By defining a child pool
and assigning it to an ownership group, the system administrator provides capacity for
volumes that ownership group users can create or manage. Child pools are supported only
with standard pools. You cannot create a child pool for a Data Reduction Pool (DRP).

Depending on the type of resource, the owning group for the resource can be defined
explicitly or inherited from explicitly defined objects. For example, a child pool needs an
ownership group parameter to be set by a system administrator. But volumes that are created
in that child pool automatically inherit the ownership group from a child pool. For more
information about ownership inheritance, see IBM Knowledge Center and expand Product
overview → Technical overview → Ownership groups.

When the user logs on to the management GUI or command-line interface (CLI), only
resources that they have access to through the ownership group are available. Additionally,
only events and commands that are related to the ownership group in which a user belongs
are viewable by those users.



11.2 Implementing ownership groups on a new system
This section describes the ownership group implementation process for a new system that has no existing volumes or users that must be migrated to ownership groups.

11.2.1 Creating an ownership group


To create the first ownership group, select Access → Ownership Groups, as shown in
Figure 11-1. Enter a name for the ownership group, and click Create Ownership Group.

Figure 11-1 Creating the first ownership group

After the first group is created, the window changes to ownership group mode, as shown in
Figure 11-2. The new ownership group has no user groups and no resources that are
assigned to it.

To create more ownership groups, click Create Ownership Group.

Figure 11-2 Ownership groups management window
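The CLI equivalent is a single command (the group name is illustrative):

mkownershipgroup -name OwnershipGroup1

The configured groups can then be listed with the lsownershipgroup command.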

11.2.2 Assigning users to an ownership group


To create accounts for users that use ownership group resources, the user group must be
created and assigned to the ownership group. To create a user group, select Access →
Users by Group, and click Create User Group. The Create User Group window opens, as
shown in Figure 11-3 on page 693. Specify the User Group name, select an ownership group
to tie this user group to, and select a role for the users in this group.

For a description of user roles, see IBM Knowledge Center and expand Product overview →
Technical overview → User roles.

To create volume, host, and other objects in an ownership group, users must have an
Administrator or Restricted Administrator role. Users with the Security Administrator role
cannot be assigned to an ownership group.

You may also set up a user group to use remote authentication, if it is enabled. To do so,
select the Lightweight Directory Access Protocol (LDAP) check box.

Figure 11-3 Creating and assigning a user group

Note: Users that use LDAP can belong to multiple user groups, but belong to only one
ownership group that is associated with one of the user groups.

If remote authentication is not configured, you must create a user (or users) and assign it to a
created user group, as shown in Figure 11-4.

Figure 11-4 Creating a user



You can manage user groups that are assigned to an ownership group by selecting
Access → Ownership Groups, as shown in Figure 11-5. To assign a user group that exists
but is not assigned to any ownership group, click Assign User Group. To unassign a user group, click the ... icon next to the assigned user group name.

Figure 11-5 Unassigning user groups

Multiple user groups with different user roles may be assigned to one ownership group. For
example, you may create and assign a user group with the Monitor role in addition to a group
with the Administrator role to have two sets of users with different privilege levels accessing
an ownership group’s resources.
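A hedged CLI sketch of the same flow, assuming local authentication and illustrative names (replace the role, names, and password with your own values):

mkusergrp -name TenantAdmins -role Administrator -ownershipgroup OwnershipGroup1
mkuser -name tenantadmin1 -usergrp TenantAdmins -password Passw0rd1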

11.2.3 Creating ownership group resources


To be able to create ownership group volumes and other resources, a child pool must be
created and assigned to the ownership group. To do this task, select Pools → Pools,
right-click a parent pool that is designated as a container for child pools, and click Create
Child Pool, as shown in Figure 11-6.

Figure 11-6 Creating a child pool

When creating a child pool, specify an ownership group for it and assign a part of the parent’s
pool capacity, as shown in Figure 11-7. Ownership group objects can use only capacity that is
provisioned for them with the child pool.

Figure 11-7 Creating a child pool and assigning it to an ownership group

Multiple child pools that are created from the same or different parent pools can be assigned
to a single ownership group.

After a child pool is created and assigned, the ownership group management window, which
you open by selecting Access → Ownership Groups, changes to show the assigned and
available resources, as shown in Figure 11-8.

Figure 11-8 Ownership group management window

Any volumes that are created in a child pool that is assigned to an ownership group inherit
ownership from the child pool.

After a child pool and user group are assigned to an ownership group, ownership group
administrators can log in with their credentials and start creating volumes, hosts and host
clusters, or FlashCopy mappings. For more information about creating those objects, see
Chapter 6, “Volumes” on page 277, Chapter 7, “Hosts” on page 369, and Chapter 10,
“Advanced Copy Services” on page 505.

Although an ownership group administrator can create objects only within the resources that
are assigned to them, the system administrator can create, monitor, and assign objects for
any ownership group.

11.2.4 Listing ownership group resources


By default, the Ownership Group attribute is not enabled in the GUI windows that list
volumes and other objects that can be owned. For convenience, the system administrator can
enable this attribute, as shown in Figure 11-9.

Figure 11-9 Enabling the ownership group attribute display



For example, the volume listing for a system administrator looks like Figure 11-10.

Figure 11-10 Listing volumes for all ownership groups

The global system administrator can see and manage the resources of all ownership groups
and resources that are not assigned to any groups.

When the ownership group user logs in, they can see and manage only resources that are
assigned to their group. Figure 11-11 shows the initial login window for an ownership group
user with the Administrator role.

This user does not see a dashboard with global system performance and capacity
parameters, but instead can see only tiles for their existing ownership group resources. Out of
eight volumes that are configured on a system and shown in Figure 11-10, they can see and
manage only three volumes that belong to the group.

Figure 11-11 Ownership group administrator view

The ownership group user can use the GUI to browse, create, and delete (depending on their
user role) resources that are assigned to their group. To see information about the global
resources (for example, list MDisks or arrays on the pool), they must use the CLI. Ownership
group users cannot manage global resources, but can only view them.
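For example, the global system administrator can review ownership from the CLI, as sketched
below. The owner columns in the concise views identify the owning group (field names as we
recall them for V8.3.1; verify on your code level). Lines that start with # are annotations,
not commands.

# List the defined ownership groups and their IDs
lsownershipgroup
# The concise volume view includes owner columns that identify the owning group
lsvdisk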

11.2.5 Actions on ownership groups


The global system administrator can rename or remove an ownership group. To do so, select
Access → Ownership Groups, click the “...” icon next to the group name, and select the
required task, as shown in Figure 11-12.

Figure 11-12 Renaming or removing an ownership group

When an ownership group is removed by using the GUI, all ownership assignment
information for all the objects of the ownership group is removed, but the objects remain
configured. Only the system administrator can manage those resources afterward.
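The equivalent CLI actions are sketched below; the group names are examples, and the syntax
should be verified against the CLI reference for your code level. Lines that start with # are
annotations, not commands.

# Rename ownership group Group0 to Tenant1
chownershipgroup -name Tenant1 Group0
# Remove the ownership group; its objects remain configured but become unowned
rmownershipgroup Tenant1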

11.3 Migrating existing objects to ownership groups


If you want to use ownership groups for objects that already exist on your system, you must
reconfigure certain resources first.

If child pools exist on the system, you can assign an ownership group to the child pool or
child pools. Before you assign an ownership group to existing child pools, determine the
other related objects that you want to migrate. Any volumes that are currently in the child
pool inherit the ownership group that is defined for the child pool.
ownership group that is defined for the child pool.

If no child pools are on the system, you must create child pools and move any volumes to
those child pools before you can assign them to ownership groups. If volumes currently are in
a parent pool, volume mirroring can be used to create copies of the volume within the child
pool. Alternatively, volume migration can be used to relocate a volume from a parent pool to a
child pool within that parent pool without requiring copying.

To non-disruptively migrate a volume to become an ownership group object, complete the
following steps:
1. Create a child pool. Do not assign it to an ownership group yet.
2. Migrate volumes that must be assigned to an ownership group to that child pool by using
the volume migration or volume mirroring function.
Figure 11-13 shows the NonOwnedVol1 volume, which is in a parent pool and does not
belong to any ownership group. This volume is mapped to a Small Computer System
Interface (SCSI) host.
To start migration, right-click the volume and click Add Volume Copy. You can also use
Migrate to Another Pool, but this method is suitable only if you are migrating from a pool
with the same extent size (for example, from a parent pool to a child pool of the same
pool), and provides less flexibility.

Figure 11-13 Adding volume copy for migration



On the Volume Copy window, select the child pool that will be assigned to an ownership
group, as shown in Figure 11-14.

Figure 11-14 Migrating to a child pool

3. Repeat step 2 on page 697 for all volumes that must belong to an ownership group, and
then remove the source copies (a CLI sketch of this copy-and-remove flow follows these steps).
4. Create an ownership group as described in 11.2.1, “Creating an ownership group” on
page 692. Assign a user group to it, as described in 11.2.2, “Assigning users to an
ownership group” on page 692.
5. As shown in Figure 11-15, in Access → Ownership Groups, select the wanted
ownership group and click Assign Child Pool.

Figure 11-15 Assigning a child pool to an ownership group

Select a child pool to assign, as shown in Figure 11-16.

Figure 11-16 Selecting a child pool to assign

After you click Next, the system notifies you that there are more resources that will inherit
ownership from a volume, and because the volume is mapped to a host, the host will
become an ownership group object, as shown in Figure 11-17 on page 699.

Figure 11-17 Additional resources to add

6. As shown in Figure 11-18, a volume and a host both belong to an ownership group. Because
both the host and the volume are in the group, the host mapping inherits ownership and
becomes part of the ownership group too.

Figure 11-18 Resources of an ownership group

Now, a child pool is assigned to an ownership group. If you must migrate more volumes to the
child pool later, the same approach can be used. However, during migration one volume copy
is in an owned child pool, and the original copy remains in an unowned parent pool. Such a
condition causes inconsistent ownership, as shown in Figure 11-19.

Figure 11-19 Example of inconsistent volume ownership

Until the inconsistent volume ownership is resolved, the volume does not belong to an
ownership group and cannot be seen or managed by an ownership group administrator. To
resolve it, delete one of the copies after both are synchronized.
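The following sketch shows the copy-and-remove flow from the previous steps on the CLI. The
volume name, child pool name, and copy ID are examples; verify the syntax against the CLI
reference for your code level. Lines that start with # are annotations, not commands.

# Add a copy of the volume in the child pool that is (or will be) owned
addvdiskcopy -mdiskgrp Group0Child NonOwnedVol1
# Wait until both copies report sync yes
lsvdiskcopy NonOwnedVol1
# Remove the original copy (copy 0 is an example ID) to resolve the ownership
rmvdiskcopy -copy 0 NonOwnedVol1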




Chapter 12. Encryption


Encryption protects against the potential exposure of sensitive user data that is stored on
discarded, lost, or stolen storage devices. Storage devices that run IBM Spectrum
Virtualize support optional encryption of data at-rest.

This chapter includes the following topics:


򐂰 12.1, “General types of encryption across IBM Spectrum Virtualize” on page 702
򐂰 12.2, “Planning for encryption” on page 703
򐂰 12.3, “Defining encryption of data at-rest” on page 703
򐂰 12.4, “Activating encryption” on page 708
򐂰 12.5, “Enabling encryption” on page 718
򐂰 12.6, “Configuring more providers” on page 743
򐂰 12.7, “Migrating between providers” on page 748
򐂰 12.8, “Recovering from a provider loss” on page 752
򐂰 12.9, “Using encryption” on page 752
򐂰 12.10, “Rekeying an encryption-enabled system” on page 761
򐂰 12.11, “Disabling encryption” on page 767



12.1 General types of encryption across IBM Spectrum Virtualize
Within IBM SAN Volume Controller and IBM FlashSystem systems that run IBM Spectrum Virtualize, there are
essentially three different types of encryption:
򐂰 Externally virtualized storage
򐂰 Serial-attached SCSI internal storage
򐂰 Non-Volatile Memory Express internal storage

12.1.1 Externally virtualized storage


Data is decrypted and encrypted as read/write I/Os are issued to the external storage. You
can have an encryption key per storage pool or per child storage pool. Migrating a volume
between pools (by using volume mirroring) can be used as a technique for encrypting and
decrypting the data.

The key per pool (and allowing different keys for child pools) supports some part of the
multi-tenant use case (if you delete a pool, you delete the key and cryptoerase the data), but
all the keys are wrapped and protected by a single master key that is obtained either from a
USB stick or an external key server.

As a special case, it is possible to turn off encryption for individual MDisks within the storage
pool, which means that if an external storage controller supports encryption, you can
choose to allow it to encrypt the data instead.

12.1.2 Serial-attached SCSI internal storage


Data is decrypted and encrypted by the serial-attached SCSI (SAS) controller. There is an
encryption key per redundant array of independent disks
(RAID) array. Normally, all arrays in a storage pool are encrypted to form an encrypted
storage pool. You can create child storage pools, but there is still only one key per RAID array.
Multi-tenancy is possible only if you have more than one array and storage pool, which
usually is not practical.

You can migrate volumes from a non-encrypted storage pool to an encrypted storage pool, or
you can add an encrypted array to a storage pool and then delete the unencrypted array
(which migrates all the data automatically) as a way of encrypting data.

12.1.3 Non-Volatile Memory Express internal storage


Data is decrypted and encrypted by the Non-Volatile Memory Express (NVMe) drives. Each
drive has a media encryption key that is wrapped and protected by an encryption key per
RAID array. So, it has the same properties as SAS internal storage.

A storage pool can include a mixture of two or all three types of storage. In this case, the SAS
and NVMe internal storage use a key per RAID array for encryption, and the externally
virtualized storage uses the pool level key. Because it is almost impossible to control exactly
what storage is used for each volume, from a security viewpoint you effectively have a single
key for the whole pool, and a cryptographic erase is possible only by deleting the entire
storage pool and arrays.

12.2 Planning for encryption
Data-at-rest encryption is a powerful tool that can help organizations protect the
confidentiality of sensitive information. However, encryption, like any other tool, must be used
correctly to fulfill its purpose.

Multiple drivers exist for an organization to implement data-at-rest encryption. These drivers
can be internal, such as protection of confidential company data and ease of storage
sanitization, or external, such as compliance with legal requirements or contractual
obligations.

Therefore, before configuring encryption on storage, the organization should define its
needs, and if it decides that data-at-rest encryption is required, include it in its security
policy. Without defining the purpose of the particular implementation of data-at-rest
encryption, it is difficult or impossible to choose the best approach to implement encryption
and to verify whether the implementation meets the set goals.

The following items are worth considering during the design of a solution that includes
data-at-rest encryption:
򐂰 Legal requirements
򐂰 Contractual obligations
򐂰 Organization's security policy
򐂰 Attack vectors
򐂰 Expected resources of an attacker
򐂰 Encryption key management
򐂰 Physical security

Multiple regulations mandate data-at-rest encryption, from the processing of Sensitive
Personal Information to the guidelines of the Payment Card Industry. If any regulatory or contractual
obligations govern the data that is held on the storage system, they often provide a wide and
detailed range of requirements and characteristics that must be realized by that system. Apart
from mandating data-at-rest encryption, these documents might contain requirements
concerning encryption key management.

Another document that should be consulted when planning data-at-rest encryption is the
organization’s security policy.

The outcome of a data-at-rest encryption planning session answers the following questions:
1. What are the goals that the organization wants to realize by using data-at-rest encryption?
2. How will data-at-rest encryption be implemented?
3. How can it be demonstrated that the proposed solution realizes the set of goals?

12.3 Defining encryption of data at-rest


Encryption is the process of encoding data so that only authorized parties can read it. Secret
keys are used to encode the data according to well-known algorithms.

Encryption of data-at-rest as implemented in IBM Spectrum Virtualize is defined by the
following characteristics:
򐂰 Data-at-rest means that the data is encrypted on the end device (drives).
򐂰 The algorithm that is used is the Advanced Encryption Standard (AES) US government
standard from 2001.



򐂰 Encryption of data at-rest complies with the Federal Information Processing Standard
140-2 (FIPS-140-2) standard.
򐂰 AES 256 is used for master access keys.
򐂰 XTS-AES-256 encryption is a FIPS 140-2 compliant algorithm.
򐂰 XTS-AES-256 is used for data encryption.
򐂰 The algorithm is public. The only secrets are the keys.
򐂰 A symmetric key algorithm is used. The same key is used to encrypt and decrypt data.

Encryption of system data and system metadata is not required, so they are not encrypted.

12.3.1 Encryption methods


There are two types of encryption on devices running IBM Spectrum Virtualize: hardware
encryption and software encryption. Both methods of encryption protect against the potential
exposure of sensitive user data that is stored on discarded, lost, or stolen media. Both can
also facilitate the warranty return or disposal of hardware.

The encryption method is chosen automatically by the system based on the
placement of the data:
򐂰 Hardware encryption: Data is encrypted by using SAS hardware. Used only for internal
storage (drives).
򐂰 Software encryption: Data is encrypted by using the node’s CPU (encryption code uses
the AES-NI CPU instruction set). Used only for external storage.

Note: Software encryption is available in IBM Spectrum Virtualize V7.6 and later.

Both methods of encryption use the same encryption algorithm, key management
infrastructure, and license.

Note: The design for encryption is based on the concept that a system is encrypted or not
encrypted. Encryption implementation is intended to encourage solutions that contain only
encrypted volumes or only unencrypted volumes. For example, after encryption is enabled
on the system, all new objects (for example, pools) are by default created as encrypted.

12.3.2 Encrypted data


IBM Spectrum Virtualize performs data-at-rest encryption, which is the process of encrypting
data that is stored on the end devices, such as physical drives.

Data is encrypted or decrypted when it is written to or read from internal drives (hardware
encryption) or external storage systems (software encryption).

So, data is encrypted when transferred across the storage area network (SAN) only between
IBM Spectrum Virtualize systems and external storage. Data in transit is not encrypted when
transferred on SAN interfaces under the following circumstances:
򐂰 Server-to-storage data transfer
򐂰 Remote Copy (RC) (for example, Global Mirror or Metro Mirror (MM))
򐂰 Intracluster communication

Note: Only data-at-rest is encrypted. Host to storage communication and data that is sent
over links that are used for Remote Mirroring are not encrypted.

Figure 12-1 shows an encryption example. Encrypted disks and encrypted data paths are
marked in blue. Unencrypted disks and data paths are marked in red. The server sends
unencrypted data to an IBM SAN Volume Controller 2145-DH8 system, which stores
hardware-encrypted data on internal disks. The data is mirrored to a remote Storwize V7000
Gen1 system by using RC. The data flowing through the RC link is not encrypted. Because
the Storwize V7000 Gen1 (2076-324) system cannot perform any encryption activities, data
on the Storwize V7000 Gen1 is not encrypted.

Figure 12-1 Encryption on a single site

To enable encryption of both data copies, the Storwize V7000 Gen1 system must be replaced
by an encryption capable (with optional encryption enabled) IBM Spectrum Virtualize system,
as shown in Figure 12-2. After the replacement, both copies of data are encrypted, but the
RC communication between both sites remains unencrypted.

Figure 12-2 Encryption on both sites



Figure 12-3 shows an example configuration that uses software and hardware encryption.
Software encryption is used to encrypt an external virtualized storage system (2076-324 in
Figure 12-3). Hardware encryption is used for internal, SAS-attached disk drives.

Figure 12-3 Example of software encryption and hardware encryption

The placement of hardware encryption and software encryption in the IBM Spectrum
Virtualize code stack is shown in Figure 12-4. Because compression is performed before
encryption, the benefits of compression are retained for encrypted data.

Figure 12-4 Encryption placement in the IBM Spectrum Virtualize Software stack

Each volume copy can use a different encryption method (hardware or software). A volume also may
have copies with different encryption statuses (encrypted versus unencrypted). The
encryption method depends only on the pool that is used for the specific copy. You can
migrate data between different encryption methods by using volume migration or volume
mirroring.

12.3.3 Encryption keys


Hardware and software encryption use the same encryption key infrastructure. The only
difference is the object that is encrypted by using the keys. The following objects can be
encrypted:
򐂰 Pools (software encryption)
򐂰 Child pools (software encryption)
򐂰 Arrays (hardware encryption)

Consider the following points regarding encryption keys:


򐂰 Keys are unique for each object, and they are created when the object is created.
򐂰 Two types of keys are defined in the system:
– Master access key:
• The master access key is created when encryption is enabled.
• The master access key can be stored on USB flash drives, key servers, or both.
One master access key is created for each enabled encryption key provider.
• It can be copied or backed up as necessary.
• It is not permanently stored anywhere in the system.
• It is required at boot time to unlock access to encrypted data.
– Data encryption keys (one for each encrypted object):
• Data encryption keys are used to encrypt data. When an encrypted object (such as
an array, a pool, or a child pool) is created, a new data encryption key is generated
for this object.
• Managed disks (MDisks) that are not self-encrypting are automatically encrypted by
using the data encryption key of the pool or child pool to which they belong.
• MDisks that are self-encrypting are not reencrypted by using the data encryption
key of the pool or child pool that they belong to by default. You can override this
default by manually configuring the MDisk as not self-encrypting (see the CLI
sketch after this list).
• Data encryption keys are stored in secure memory.
• During cluster internal communication, data encryption keys are encrypted with the
master access key.
• Data encryption keys cannot be viewed.
• Data encryption keys cannot be changed.
• When an encrypted object is deleted, its data encryption key is discarded (secure
erase).
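As referenced in the list above, the self-encrypting attribute of an external MDisk can be
checked and overridden from the CLI. The following is a minimal sketch that assumes an MDisk
named mdisk5; lines that start with # are annotations, not commands.

# The detailed MDisk view includes an encrypt field that reports the attribute
lsmdisk mdisk5
# Treat the MDisk as not self-encrypting so that the pool key encrypts it
chmdisk -encrypt no mdisk5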



Important: If all master access key copies are lost and the system must cold-restart, all
encrypted data is gone. No method exists, even for IBM, to decrypt the data without the
keys. If encryption is enabled and the system cannot access the master access key, all
SAS hardware is offline, including unencrypted arrays.

Note: A self-encrypting MDisk is an MDisk from an encrypted volume in an external
storage system.

12.3.4 Encryption licenses


Encryption is a licensed feature that uses key-based licensing. A license must be present for
each control enclosure in the system before you can enable encryption.

If you add a control enclosure to a system that has encryption that is enabled, the control
enclosure must also be licensed.

No trial licenses for encryption exist because access to the data would be lost when the
trial ran out. Therefore, you must purchase an encryption license before you activate encryption.
Licenses are generated by IBM Data Storage Feature Activation (DSFA) based on the serial
number (S/N) and the machine type and model (MTM) of the control enclosure.

You can activate an encryption license during the initial system setup (on the Encryption
window of the initial setup wizard) or later on in the running environment.

To purchase an encryption license, contact your IBM marketing representative or
IBM Business Partner.

12.4 Activating encryption


Encryption is enabled at a system level, and all the following prerequisites must be met to use
encryption:
򐂰 You must purchase an encryption license before you activate the function.
If you did not purchase a license, contact an IBM marketing representative or
IBM Business Partner to purchase an encryption license.
򐂰 At least three USB flash drives are required if you do not plan to use a key management
server. They are available as a feature code from IBM.
򐂰 You must activate the license that you purchased.
򐂰 Encryption must be enabled.

Activation of the license can be performed in one of two ways:


򐂰 Automatic activation: Used when you have the authorization code, and the workstation
that is being used to activate the license has access to the external network. In this case, you
must enter only the authorization code. The license key is automatically obtained from the
internet and activated in the IBM Spectrum Virtualize system.
򐂰 Manual activation: If you cannot activate the license automatically because any of the
requirements are not met, you can follow the instructions that are provided in the GUI to
obtain the license key from the web and activate it in the IBM Spectrum Virtualize system.

Both methods are available during the initial system setup and when the system is in use.
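The license can also be activated from the CLI, as in the following sketch. The license key
is a placeholder; verify the syntax against the CLI reference for your code level. Lines that
start with # are annotations, not commands.

# Apply the encryption license key that was obtained from DSFA (placeholder value)
activatefeature -licensekey 0123-4567-89AB-CDEF
# Confirm that the encryption feature is shown as licensed
lsfeature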

12.4.1 Obtaining an encryption license
You must purchase an encryption license before you activate encryption. If you did not
purchase a license, contact an IBM marketing representative or IBM Business Partner to
purchase an encryption license.

When you purchase a license, you receive a function authorization document with an
authorization code that is printed on it. With this code, you may proceed with the automatic
activation process.

If the automatic activation process fails or if you prefer to use the manual activation process,
see IBM Data Storage Feature Activation to retrieve your license keys.

Ensure that you have the following information:


򐂰 Machine type (MT)
򐂰 Serial number (S/N)
򐂰 Machine signature
򐂰 Authorization code

For more information about how to retrieve the machine signature of a control enclosure, see
12.4.5, “Activating the license manually” on page 716.

12.4.2 Starting the activation process during the initial system setup
One of the steps in the initial setup enables the encryption license activation. The system
asks “Was the encryption feature purchased for this system?”. To activate encryption at
this stage, complete the following steps:
1. Select Yes, as shown in Figure 12-5.

Figure 12-5 Encryption activation during the initial system setup



The Encryption window displays information about your storage system, as shown in
Figure 12-6.

Figure 12-6 Information storage system during the initial system setup

2. Right-click the control enclosure to open a menu with two license activation options
(Activate License Automatically and Activate License Manually), as shown in
Figure 12-7. Use either option to activate encryption. For more information about how to
complete the automatic activation process, see 12.4.4, “Activating the license
automatically” on page 713. For more information about how to complete a manual
activation process, see 12.4.5, “Activating the license manually” on page 716.

Figure 12-7 Selecting the license activation method

3. After either activation process is complete, you can see a green check mark in the column
that is labeled Licensed next to a control enclosure for which the license was enabled.
You can proceed with the initial system setup by clicking Next, as shown in Figure 12-8.

Note: Every enclosure needs an active encryption license before you can enable
encryption on the system. Attempting to add a non-licensed enclosure to an
encryption-enabled system fails.

Figure 12-8 Successful encryption license activation during the initial system setup



12.4.3 Starting the activation process on a running system
To activate encryption on a running system, complete the following steps:
1. Select Settings → System → Licensed Functions.
2. Click Encryption Licenses, as shown in Figure 12-9.

Figure 12-9 Expanding the Encryption Licenses section on the Licensed Functions window

3. The Encryption Licenses window displays information about your control enclosures.
Right-click the enclosure on which you want to install an encryption license. This action
opens a menu with two license activation options (Activate License Automatically and
Activate License Manually), as shown in Figure 12-10. Use either option to activate
encryption. For more information about how to complete an automatic activation process,
see 12.4.4, “Activating the license automatically” on page 713. For more information about
how to complete a manual activation process, see 12.4.5, “Activating the license
manually” on page 716.

Figure 12-10 Selecting the Control Enclosure on which you want to enable the encryption

After either activation process is complete, you can see a green check mark in the column
that is labeled Licensed for the control enclosure, as shown in Figure 12-11.

Figure 12-11 Successful encryption license activation on a running system

12.4.4 Activating the license automatically


The automatic license activation is the faster method to activate the encryption license for
IBM Spectrum Virtualize. You need the authorization code and the workstation that is used to
access the GUI that can access the external network.

Important: To perform this operation, the PC that was used to connect to the GUI and
activate the license must connect to the internet.

To activate the encryption license for a control enclosure automatically, complete the following
steps:
1. Click Activate License Automatically to open the Activate License Automatically
window, as shown in Figure 12-12.

Figure 12-12 Encryption license Activate License Automatically window



2. Enter the authorization code that is specific to the control enclosure that you selected, as
shown in Figure 12-13. Click Activate.

Figure 12-13 Entering an authorization code

The system connects to IBM to verify the authorization code and retrieve the license key.
Figure 12-14 shows a window that is displayed during this connection. If everything works
correctly, the procedure takes less than a minute.

Figure 12-14 Activating encryption

After the license key is retrieved, it is automatically applied, as shown in Figure 12-15.

Figure 12-15 Successful encryption license activation

Problems with automatic license activation
If connection problems occur with the automatic license activation procedure, the system
times out after 3 minutes with an error.

Check whether the PC that is used to connect to the IBM FlashSystem GUI and activate the
license can access the internet. If you cannot complete the automatic activation procedure,
use the manual activation procedure that is described in 12.4.5, “Activating the license
manually” on page 716.

Although authorization codes and encryption license keys use the same format (four groups
of four hexadecimal digits), you can use each of them only in the appropriate activation
process. If you use a license key when the system expects an authorization code, the system
displays an error message, as shown in Figure 12-16.

Figure 12-16 Authorization code failure



12.4.5 Activating the license manually
To manually activate the encryption license for a control enclosure, complete the following
steps:
1. Click Activate License Manually to open the Manual Activation window, as shown in
Figure 12-17.

Figure 12-17 Manual encryption license activation window

2. If you have not done so, obtain the encryption license for the control enclosure. The
information that is required to obtain the encryption license is displayed in the Manual
Activation window. Use this data to follow the instructions in 12.4.1, “Obtaining an
encryption license” on page 709.

3. You can enter the license key by typing it, pasting it, or clicking the folder icon and
uploading the license key file that was downloaded from DSFA to the storage system. In
Figure 12-18, the sample key is entered. Click Activate.

Figure 12-18 Entering an encryption license key

After the task completes successfully, the GUI shows that encryption is licensed for the
specified control enclosure, as shown in Figure 12-19.

Figure 12-19 Successful encryption license activation



Problems with manual license activation
Although authorization codes and encryption license keys use the same format (four groups
of four hexadecimal digits), you can use each of them only in the appropriate activation
process. If you use an authorization code when the system expects a license key, the system
displays an error message, as shown in Figure 12-20.

Figure 12-20 License key failure

12.5 Enabling encryption


This section describes the process to create and store system master access key copies,
which are also known as encryption keys. These keys can be stored on either or both of two key
providers: USB flash drives or a key server.

Two types of key servers are supported by IBM Spectrum Virtualize:


򐂰 IBM Security Key Lifecycle Manager (IBM SKLM), which was introduced in IBM Spectrum
Virtualize V7.8.
򐂰 Gemalto SafeNet KeySecure, which was introduced in IBM Spectrum Virtualize V8.2.1.

For a list of supported key servers, see Supported Key Servers - IBM Spectrum Virtualize.

IBM Spectrum Virtualize V8.1 introduced the ability to define up to four encryption key
servers, which is a preferred configuration because it increases key provider availability. In
this version, support for simultaneous use of both USB flash drives and key server was
added.

Organizations that use encryption key management servers might consider parallel use of
USB flash drives as a backup solution. During normal operation, such drives can be
disconnected and stored in a secure location. However, during a catastrophic loss of
encryption servers, the USB drives can still be used to unlock the encrypted storage.

The key server and USB flash drive characteristics that are described next might help you to
choose the type of encryption key provider that you want to use.

Key servers can have the following characteristics:


򐂰 Physical access to the system is not required to perform a rekey operation.
򐂰 Support for businesses that have security requirements that preclude the use of USB
ports.
򐂰 Possibility to use hardware security modules (HSMs) for encryption key generation.
򐂰 Ability to replicate keys between servers and perform automatic backups.
򐂰 Implementations follow an open standard (Key Management Interoperability Protocol
(KMIP)) that aids in interoperability.
򐂰 Ability to audit operations that are related to key management.
򐂰 Ability to separately manage encryption keys and physical access to storage systems.

USB flash drives have the following characteristics:


򐂰 Physical access to the system might be required to process a rekey operation.
򐂰 No moving parts with almost no read or write operations to the USB flash drive.
򐂰 Inexpensive to maintain and use.
򐂰 Convenient and easy to have multiple identical USB flash drives available as backups.

Important: Maintaining confidentiality of the encrypted data hinges on the security of the
encryption keys. Pay special attention to ensure secure creation, management, and
storage of the encryption keys.

12.5.1 Starting the Enable Encryption wizard


After the license activation step completes on the IBM FlashSystem control enclosures, you
can enable encryption. Encryption can be enabled after the initial system setup is complete
by using either the GUI or the CLI.

There are two ways in the GUI to start the Enable Encryption wizard:
򐂰 It can be started by clicking Run Task next to Enable Encryption on the Suggested Tasks
window, as shown in Figure 12-21.

Figure 12-21 Enable Encryption from the Suggested Tasks window



򐂰 You can select Settings → Security → Encryption, and the click Enable Encryption, as
shown in Figure 12-22.

Figure 12-22 Enable Encryption from the Security pane

The Enable Encryption wizard starts by prompting you to select the encryption key provider to
use for storing the encryption keys, as shown in Figure 12-23. You can enable either or both
providers.

Figure 12-23 Enable Encryption wizard Welcome window

The next section presents a scenario in which both encryption key providers are enabled
concurrently.

For more information about how to enable encryption by using only USB flash drives, see
12.5.2, “Enabling encryption by using USB flash drives” on page 721.

For more information about how to enable encryption by using key servers as the sole
encryption key provider, see 12.5.3, “Enabling encryption by using key servers” on page 726.

12.5.2 Enabling encryption by using USB flash drives

Note: The system needs at least three USB flash drives before you can enable encryption
by using this encryption key provider. IBM USB flash drives are preferred and can be
obtained from IBM with the Feature Code Encryption USB flash drives (Four Pack). Other
flash drives might also work. You can use any USB port on any node of the cluster.

Using USB flash drives as the encryption key provider requires a minimum of three USB flash
drives to store the generated encryption keys. Because the system attempts to write the
encryption keys to any USB flash drive that is inserted into a USB port on a node, it is
critical to maintain physical security of the system during this procedure.

While the system enables encryption, you are prompted to insert USB flash drives into the
system. The system generates and copies the encryption keys to all available USB flash
drives.

Ensure that each copy of the encryption key is valid before you write any user data to the
system. The system validates any key material on a USB flash drive when it is inserted into
the canister. If the key material is not valid, the system logs an error. If a USB flash drive is
unusable or fails, the system does not display it in the output. Figure 12-26 on page 724 shows
an example where the system detected and validated three USB flash drives.

If your system is in a secure location with controlled access, one USB flash drive for each
canister can remain inserted in the system. If a risk of unauthorized access exists, all USB
flash drives with the master access keys must be removed from the system and stored in a
secure place.

Securely store all copies of the encryption key. For example, any USB flash drives that are
holding an encryption key copy that are not left plugged into the system can be locked in a
safe. Similar precautions must be taken to protect any other copies of the encryption key that
are stored on other media.

Notes: Generally, create at least one extra copy on another USB flash drive for storage in
a secure location. You can also copy the encryption key from the USB drive and store the
data on other media, which can provide extra resilience and mitigate risk that the USB
drives used to store the encryption key come from a faulty batch.

Every encryption key copy must be stored securely to maintain confidentiality of the
encrypted data.



A minimum of one USB flash drive with the correct master access key is required to unlock
access to encrypted data after a system restart, such as a system-wide restart or power loss.
No USB flash drive is required during a warm restart, such as node-exiting service mode or a
single node restart. The data center power-on procedure must ensure that USB flash drives
containing encryption keys are plugged into the storage system before it is powered on.

During power-on, insert the USB flash drives into the USB ports on two supported canisters
to safeguard against failure of a node, node’s USB port, or USB flash drive during the
power-on procedure.

To enable encryption by using USB flash drives as the only encryption key provider, complete
the following steps:
1. In the Enable Encryption wizard Welcome tab, select USB flash drives and click Next, as
shown in Figure 12-24.

Figure 12-24 Selecting USB flash drives in the Enable Encryption wizard

2. If there are fewer than three USB flash drives that are inserted into the system, you are
prompted to insert more drives, as shown in Figure 12-25. The system reports how many
more drives must be inserted.

Figure 12-25 Waiting for USB flash drives to be inserted

Note: The Next option remains disabled until at least three USB flash drives are
detected.

3. Insert the USB flash drives into the USB ports as requested.



After the minimum required number of drives is detected, the encryption keys are
automatically copied on the USB flash drives, as shown in Figure 12-26.

Figure 12-26 Writing the master access key to USB flash drives

You can keep adding USB flash drives or replacing the drives that are plugged in to create
new copies. When done, click Next.

4. The number of keys that were created is shown in the Summary tab, as shown in
Figure 12-27. Click Finish to finalize the encryption enablement.

Figure 12-27 Committing the encryption enablement

You receive a message confirming that the encryption is now enabled on the system, as
shown in Figure 12-28.

Figure 12-28 Encryption enabled message by using a USB flash drive



5. You can confirm that encryption is enabled and verify which key providers are in use by
selecting Settings → Security → Encryption, as shown in Figure 12-29.

Figure 12-29 Encryption view that uses USB flash drives as the enabled provider
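For reference, the CLI flow for enabling encryption with USB flash drives follows a
prepare-and-commit pattern, as in the sketch below (command options as we recall them for
this code level; verify against the CLI reference). Lines that start with # are annotations,
not commands.

# Generate a new master access key and write it to the inserted USB flash drives
chencryption -usb newkey -key prepare
# Commit after the required number of key copies was written
chencryption -usb newkey -key commit
# Verify the encryption state of the system
lsencryption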

12.5.3 Enabling encryption by using key servers


A key server is a centralized system that receives and then distributes encryption keys to its
clients, including IBM Spectrum Virtualize systems.

IBM Spectrum Virtualize supports the following key servers as encryption key providers:
򐂰 IBM SKLM
򐂰 Gemalto SafeNet KeySecure

Note: Support for IBM SKLM was introduced in IBM Spectrum Virtualize V7.8. Support for
Gemalto SafeNet KeySecure was introduced in IBM Spectrum Virtualize V8.2.1.

IBM SKLM and SafeNet KeySecure support KMIP, which is a standard for the management of
cryptographic keys.

Note: Make sure that the key management server function is fully independent from the
encrypted storage that has encryption that is managed by this key server environment.
Failure to observe this requirement might create an encryption deadlock. An encryption
deadlock is a situation in which none of key servers in the environment can become
operational because some critical part of the data in each server is stored on a storage
system that depends on one of the key servers to unlock access to the data.

IBM Spectrum Virtualize V8.1 and later supports up to four key server objects that are defined
in parallel, but only one key server type (IBM SKLM or KeySecure) can be enabled at a time.

Another characteristic when working with key servers is that it is not possible to migrate from
one key server type directly to another. If you want to migrate from one type to another, you
first must migrate from your current key server to USB encryption, and then migrate from USB
to the other type of key server.

Enabling encryption by using IBM SKLM
Before you create a key server object in the storage system, the key server must be
configured. Ensure that you complete the following tasks on the IBM SKLM server before you
enable encryption on the storage system:
򐂰 Configure the IBM SKLM server to use Transport Layer Security V1.2. The default setting
is TLSv1, but IBM Spectrum Virtualize supports only Version 1.2. So, set the value of
security protocols to SSL_TLSv2 (which is a set of protocols that includes TLS V1.2) in the
IBM SKLM server configuration properties.
򐂰 Ensure that the database service is started automatically on startup.
򐂰 Ensure that there is at least one Secure Sockets Layer (SSL) certificate for browser
access.
򐂰 Create a SPECTRUM_VIRT device group for IBM Spectrum Virtualize systems.

For more information about completing these tasks, see IBM Knowledge Center.

Access to the key server that stores the correct master access key is required to enable
access to encrypted data in the system after a system restart. A system restart might be a
system-wide restart or power loss. Access to the key server is not required during a warm
restart, such as node-exiting service mode or a single node restart. The data center power-on
procedure must ensure key server availability before the storage system that uses encryption
starts. If a system with encrypted data restarts and does not have access to the encryption
keys, then the encrypted storage pools are offline until the encryption keys are detected.

To enable encryption by using an IBM SKLM key server, complete the following steps:
1. Ensure that service IP addresses are configured on all your nodes.
2. In the Enable Encryption wizard Welcome tab, select Key servers and click Next, as
shown in Figure 12-30.

Figure 12-30 Selecting the key server as the only provider in the Enable Encryption wizard



3. Select IBM SKLM (with KMIP) as the key server type, as shown in Figure 12-31.

Figure 12-31 Selecting IBM SKLM as the key server type

4. The wizard opens the Key Servers tab, as shown in Figure 12-32 on page 729. Enter the
name and Internet Protocol (IP) address of the key servers. The first key server that is
specified must be the primary IBM SKLM key server.

Note: The supported versions of IBM SKLM (up to Version 3.0, which was the latest
code version available at the time of writing) differentiate between the primary and
secondary key server role. The Primary IBM SKLM server as defined on the Key
Servers window of the Enable Encryption wizard must be the server that is defined as
the primary by IBM SKLM administrators.

The key server name serves only as a label. Only the provided IP address is used to
contact the server. If the key server’s TCP port number differs from the default value for
the KMIP protocol (that is, 5696), enter the port number. An example of a complete
primary IBM SKLM configuration is shown in Figure 12-32 on page 729.

Figure 12-32 Configuration of the primary IBM SKLM server

5. If you want to add secondary IBM SKLM servers, click the + symbol and enter the data for
secondary IBM SKLM servers, as shown on Figure 12-33. You can define up to four
IBM SKLM servers. Click Next when you are done.

Figure 12-33 Configuring multiple IBM SKLM servers



6. The next window in the wizard is a reminder that the SPECTRUM_VIRT device group that
is dedicated for IBM Spectrum Virtualize systems must exist on the IBM SKLM key
servers. Make sure that this device group exists and click Next to continue, as shown in
Figure 12-34.

Figure 12-34 Checking the key server device group

7. Enable secure communication between the IBM Spectrum Virtualize system and the
IBM SKLM key servers by uploading the key server certificate from a trusted third-party
certificate authority (CA) or by using a self-signed certificate. The self-signed certificate
can be obtained from each of key servers directly.
After uploading any of the certificates in the window that are shown in Figure 12-35, click
Next.

Figure 12-35 Uploading the key-server or certificate authority SSL certificate

8. Configure the IBM SKLM key server to trust the public key certificate of the IBM Spectrum
Virtualize system. You can download the IBM Spectrum Virtualize system public SSL
certificate by clicking Export Public Key, as shown in Figure 12-36. Install this certificate
in the IBM SKLM key server in the SPECTRUM_VIRT device group.

Figure 12-36 Downloading the IBM Spectrum Virtualize SSL certificate

9. When the IBM Spectrum Virtualize system public key certificate is installed on the
IBM SKLM key servers, acknowledge this installation by selecting the box that is indicated
in Figure 12-37 and click Next.

Figure 12-37 Acknowledging the IBM Spectrum Virtualize public key certificate transfer



10. The key server configuration is shown in the Summary tab, as shown in Figure 12-38.
Click Finish to create the key server object and finalize the encryption enablement.

Figure 12-38 Finishing the enablement of encryption by using IBM SKLM key servers

11. If no errors occur while the key server object is created, you receive a message that
confirms that the encryption is now enabled on the system, as shown in Figure 12-39.
Click Close.

Figure 12-39 Encryption enabled message that uses an IBM SKLM key server

12. Verify that encryption is enabled by selecting Settings → Security → Encryption, as
shown in Figure 12-40. The Online state indicates which IBM SKLM servers are
detected as available by the system.

Figure 12-40 Encryption that is enabled with only IBM SKLM servers as encryption key providers
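Key server objects can also be defined and checked from the CLI. The following sketch uses a
placeholder IP address, and the parameter names are as we recall them for this code level; on
some levels, the primary role is set with chkeyserver. Verify against the CLI reference.
Lines that start with # are annotations, not commands.

# Define a key server object (IP address and port are placeholders)
mkkeyserver -ip 10.0.0.50 -port 5696
# List the configured key servers and confirm that they are online
lskeyserver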

Enabling encryption by using SafeNet KeySecure


IBM Spectrum Virtualize V8.2.1 introduced support for Gemalto SafeNet KeySecure, which is
a third-party key management server. It can be used as an alternative to IBM SKLM.

IBM Spectrum Virtualize supports Gemalto SafeNet KeySecure V8.3.0 and later, and uses
only the KMIP protocol. It is possible to configure up to four SafeNet KeySecure servers in
IBM Spectrum Virtualize for redundancy, and they can coexist with USB flash drive
encryption.

It is not possible to have both SafeNet KeySecure and IBM SKLM key servers that are
configured concurrently in IBM Spectrum Virtualize. It is also not possible to migrate directly
from one type of key server to another (from IBM SKLM to SafeNet KeySecure or vice versa).
If you want to migrate from one type to another, first migrate to USB flash drives encryption,
and then migrate to the other type of key servers.

KeySecure uses an active-active clustered model. All changes to one key server are instantly
propagated to all other servers in the cluster.

Although KeySecure uses the KMIP protocol like IBM SKLM does, an option is available to
configure the user name and password for IBM Spectrum Virtualize and KeySecure server
authentication, which is not possible when the configuration is performed with IBM SKLM.

The certificate for client authentication in SafeNet KeySecure can be self-signed or signed by
a CA.



To enable encryption in IBM Spectrum Virtualize by using a Gemalto SafeNet KeySecure key
server, complete the following steps:
1. Ensure that the service IP addresses are configured on all your nodes.
2. In the Enable Encryption wizard Welcome tab, select Key servers and click Next, as
shown in Figure 12-41.

Figure 12-41 Selecting key servers as the only provider in the Enable Encryption wizard

3. In the next window, you can choose between the IBM SKLM and Gemalto SafeNet
KeySecure server types, as shown in Figure 12-42. Select Gemalto SafeNet KeySecure
and click Next.

Figure 12-42 Selecting Gemalto SafeNet KeySecure as the key server type

4. Add up to four SafeNet KeySecure servers in the next wizard window, as shown in
Figure 12-43. For each key server, enter the name, IP address, and TCP port for the KMIP
protocol (the default value is 5696). The server name is only a label, so it does not need to
be the real host name of the server.
Although Gemalto SafeNet KeySecure uses an active-active clustered model,
IBM Spectrum Virtualize asks for a primary key server. The primary key server represents
only the KeySecure server that is used for key create and rekey operations. Therefore, any
of the clustered key servers can be selected as the primary.
Selecting a primary key server is beneficial for load balancing. Any of the four key servers
can be used to retrieve the master key.

Figure 12-43 Configuring multiple SafeNet KeySecure servers



5. The next window in the wizard prompts for the key server credentials (user name and
password), as shown in Figure 12-44. This setting is optional because it depends on how
the SafeNet KeySecure servers are configured.

Figure 12-44 Key server credentials input (optional)

6. Enable secure communication between the IBM Spectrum Virtualize system and the
SafeNet KeySecure key servers by uploading the key server certificate from a trusted
third-party CA or by using a self-signed certificate. The self-signed certificate can be
obtained from each of key servers directly. After uploading any of the certificates in the
window that is shown in Figure 12-45, click Next.

Figure 12-45 Uploading the SafeNet KeySecure key servers certificate

7. Configure the SafeNet KeySecure key servers to trust the public key certificate of the
IBM Spectrum Virtualize system. You can download the IBM Spectrum Virtualize system
public SSL certificate by clicking Export Public Key, as shown in Figure 12-46. After
adding the public key certificate to the key servers, select the box and click
Next.

Figure 12-46 Downloading the IBM Spectrum Virtualize SSL certificate

8. The key server configuration is shown in the Summary tab, as shown in Figure 12-47.
Click Finish to create the key server object and finalize the encryption enablement.

Figure 12-47 Finishing the enablement of encryption by using SafeNet KeySecure key servers



9. If no errors occurred while creating the key server object, you receive a message that
confirms that the encryption is now enabled on the system, as shown in Figure 12-48.
Click Close.

Figure 12-48 Encryption that is enabled by using SafeNet KeySecure key servers

10. Verify that encryption is enabled by selecting Settings → Security → Encryption, as
shown in Figure 12-49. Check whether the four servers are shown as online, which
indicates that all four SafeNet KeySecure servers are detected as available by the system.

Figure 12-49 Encryption that is enabled with four SafeNet KeySecure key servers
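The same verification can be performed from the CLI, as in this short sketch (verify the
command names against the CLI reference for your code level). Lines that start with # are
annotations, not commands.

# All four configured KeySecure servers should report an online state
lskeyserver
# Confirm the overall encryption state of the system
lsencryption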

12.5.4 Enabling encryption by using both providers
IBM Spectrum Virtualize enables parallel use of both a USB flash drive and one type of key
server (IBM SKLM or SafeNet KeySecure) as encryption key providers. It is possible to
configure both providers in a single run of the Encryption Enable wizard. To perform this
configuration, the system must meet requirements of both key server (IBM SKLM of SafeNet
KeySecure) and USB flash drive encryption key providers.

Note: Make sure that the key management server function is fully independent from
encrypted storage that has encryption that is managed by this key server environment.
Failure to observe this requirement might create an encryption deadlock. An encryption
deadlock is a situation in which none of key servers in the environment can become
operational because some critical part of the data in each server is stored on an encrypted
storage system that depends on one of the key servers to unlock access to the data.

IBM Spectrum Virtualize V8.1 and later supports up to four key server objects that are defined
in parallel.

Before you enable encryption by using both USB flash drives and key servers, confirm the
requirements that are described in 12.5.2, “Enabling encryption by using USB flash drives” on
page 721 and 12.5.3, “Enabling encryption by using key servers” on page 726.

To enable encryption by using a key server and USB flash drive, complete the following steps:
1. Ensure that you have service IP addresses that are configured on all your nodes.
2. In the Enable Encryption wizard Welcome tab, select Key servers and USB flash drives
and click Next, as shown in Figure 12-50.

Figure 12-50 Selecting key servers and USB flash drives in the Enable Encryption wizard



3. The wizard opens the Key Server Types window, as shown in Figure 12-51. Select the key
server type that manages the encryption keys.

Figure 12-51 Selecting the key server type

The next windows that open are the same ones that are shown in 12.5.3, “Enabling
encryption by using key servers” on page 726, depending on the type of key server
selected.
When the key servers details are entered, the USB flash drive encryption configuration is
displayed. In this step, master encryption key copies are stored in the USB flash drives. If
fewer than three drives are detected, the system requests plugging in more USB flash
drives, as shown on Figure 12-52. You cannot proceed until the required minimum number
of USB flash drives is detected by the system.

Figure 12-52 Prompt to insert USB flash drives

After at least three USB flash drives are detected, the system writes the master access key
to each of the drives. The system attempts to write the encryption key to any flash drive it
detects. Therefore, it is crucial to maintain the physical security of the system during this
procedure. After the keys are successfully copied to at least three USB flash drives, the
system opens a window, as shown in Figure 12-53.

Figure 12-53 Master Access Key that is successfully copied to USB flash drives

4. After copying the encryption keys to USB flash drives, the next window opens and shows a
summary of the configuration that is implemented on the system (see Figure 12-54). Click
Finish to create the key server object and finalize the encryption enablement.

Figure 12-54 Encryption configuration summary in a two-provider scenario



5. If no errors occur while creating the key server object, the system displays a window that
confirms that the encryption is now enabled on the system and that both encryption key
providers are enabled (see Figure 12-55).

Figure 12-55 Encryption enabled message that uses both encryption key providers

6. You can confirm that encryption is enabled and verify which key providers are in use by
selecting Settings → Security → Encryption, as shown in Figure 12-56 on page 743.
Note the Online state of the key servers and the Validated state of the USB ports where
USB flash drives are inserted to make sure that they are properly configured.

Figure 12-56 Encryption that is enabled with both USB flash drives and key servers

12.6 Configuring more providers


After the system is configured with a single encryption key provider, a second provider can be
added.

Note: If you set up encryption of your storage system when it was running a version of
IBM Spectrum Virtualize earlier than Version 7.8.0, you must rekey the master encryption
key before you can enable a second encryption provider when you upgrade to Version 8.1
or later.



12.6.1 Adding key servers as a second provider
If the storage system is configured with the USB flash drive provider, it is possible to configure
IBM SKLM or SafeNet KeySecure servers as a second provider. To enable key servers as a
second provider, complete the following steps:
1. Select Settings → Security → Encryption. Expand the Key Servers section and click
Configure, as shown in Figure 12-57. To enable a key server as a second provider, the
system must detect at least one USB flash drive with a current copy of the master access
key.

Figure 12-57 Enabling key servers as a second provider

2. Complete the steps that are required to configure the key server provider, as described in
12.5.3, “Enabling encryption by using key servers” on page 726. The difference in the
process that is described in that section is that the wizard gives you an option to disable
USB flash drive encryption, which aims to migrate from the USB flash drive to key server
provider.
Select No to enable both encryption key providers, as shown in Figure 12-58 on page 745.

Figure 12-58 Do not disable the USB flash drive encryption key provider

This choice is confirmed on the summary window before the configuration is committed,
as shown in Figure 12-59.

Figure 12-59 Configuration summary before committing



3. After you click Finish, the system configures the key servers as a second encryption key
provider. Successful completion of the task is confirmed by a message, as shown in
Figure 12-60. Click Close.

Figure 12-60 Confirmation of successful configuration of two encryption key providers

4. You can confirm that encryption is enabled and verify which key providers are in use by
selecting Settings → Security → Encryption, as shown in Figure 12-61. Note the Online
state of key servers and Validated state of USB ports where USB flash drives are
inserted to make sure that they are properly configured.

Figure 12-61 Encryption that is enabled with two key providers available

12.6.2 Adding USB flash drives as a second provider
If the storage system is configured with an IBM SKLM or SafeNet KeySecure encryption key
provider, it is possible to configure USB flash drives as a second provider. To enable USB
flash drives as a second provider, complete the following steps:
1. Select Settings → Security → Encryption. Expand the USB Flash Drives section and
click Configure, as shown in Figure 12-62. To enable USB flash drives as a second
provider, the system must access key servers with the current master access key.

Figure 12-62 Enabling USB flash drives as a second encryption key provider

2. After you click Configure, you see a wizard like the one that is described in 12.5.2,
“Enabling encryption by using USB flash drives” on page 721. You cannot disable key
server providers during this process.
After successful completion of the process, you are presented with a message confirming
that both encryption key providers are enabled, as shown in Figure 12-63.

Figure 12-63 Confirmation of successful configuration of two encryption key providers



3. You can confirm that encryption is enabled and verify which key providers are in use by
selecting Settings → Security → Encryption, as shown in Figure 12-64. Note the Online
state of the key servers and the Validated state of the USB ports where USB flash drives
are inserted to make sure that they are properly configured.

Figure 12-64 Encryption that is enabled with two key providers available

12.7 Migrating between providers


IBM Spectrum Virtualize V8.1 introduced support for simultaneous use of both USB flash
drives and a key server as encryption key providers. The system also enables migration from
a configuration that uses only a USB flash drive provider to a key server provider, and vice
versa.

If you want to migrate from one key server type to another (for example, migrating from
IBM SKLM to SafeNet KeySecure or vice versa), direct migration is not possible. In this case,
you first must migrate from the current key server type to a USB flash drive, and then migrate
to the other type of key server.

12.7.1 Changing from a USB flash drive provider to an encryption key server
The system is designed to facilitate changing from a USB flash drive encryption key provider
to an encryption key server provider. Follow the steps that are described in 12.6.1, “Adding
key servers as a second provider” on page 744, but when completing step 2 on page 744,
select Yes instead of No (see Figure 12-65). This action deactivates the USB flash drive
provider, and the procedure completes with only key servers configured as the key provider.

Figure 12-65 Disabling the USB flash drive provider while changing to the IBM SKLM provider

12.7.2 Changing from an encryption key server to a USB flash drive provider
Changing from an encryption key server provider to a USB flash drive provider is not
possible by using only the GUI.

To make this change, add USB flash drives as a second provider by completing the steps
that are described in 12.6.2, “Adding USB flash drives as a second provider” on page 747.

Then, run the following command in the CLI to make sure that the USB drives contain the
correct master access key:

chencryption -usb validate

After the validation succeeds, disable the encryption key server provider by running the
following command:

chencryption -keyserver disable

This command disables the encryption key server provider, which effectively migrates your
system from an encryption key server to a USB flash drive provider.
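
You can verify the resulting provider configuration from the CLI as well. The following sketch
assumes the lsencryption command; treat the output fields as illustrative because they can
vary between code levels:

IBM_Storwize:ITSO-V7k:superuser>lsencryption
status enabled
usb_key_status validated
keyserver_status disabled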



12.7.3 Migrating between different key server types
Migration cannot be performed directly from one type of key server to another. USB flash
drive encryption must be used to facilitate this task.

If you want to migrate from one type of key server to another, you first must migrate from your
current key servers to USB encryption, and then migrate from USB to the other type of key
servers.

The procedure to migrate from one key server type to another is shown here. In this example,
we migrate an IBM Spectrum Virtualize system that is configured with IBM SKLM key servers
(as shown in Figure 12-66) to SafeNet KeySecure servers.

Figure 12-66 IBM Spectrum Virtualize encryption that is configured with IBM SKLM servers

To migrate to Gemalto SafeNet KeySecure, complete the following steps:


1. Migrate from key server encryption to USB flash drives encryption, as described in 12.7.2,
“Changing from an encryption key server to a USB flash drive provider” on page 749. After
this step, only USB flash drives encryption is configured, as shown in Figure 12-67 on
page 751.

Figure 12-67 IBM FlashSystem encryption that is configured with USB flash drives

2. Migrate from USB flash drives encryption to another key server type encryption (in this
example, Gemalto SafeNet KeySecure) by following the steps that are described in 12.7.1,
“Changing from a USB flash drive provider to an encryption key server” on page 749. After
completing this step, the other key server type is configured as an encryption provider in
IBM Spectrum Virtualize, as shown in Figure 12-68.

Figure 12-68 IBM FlashSystem encryption that is configured with SafeNet KeySecure



12.8 Recovering from a provider loss
If both encryption key providers are enabled and you lose one of them (by losing all copies of
the encryption key that are kept on the USB flash drives or by losing all IBM SKLM servers),
you can recover from this situation by disabling the provider to which you lost access. To
disable the unavailable provider, you must have access to a valid master access key on the
remaining provider.

If you lose access to the encryption key server provider, run the following command:
chencryption -keyserver disable

If you lose access to the USB flash drives provider, run the following command:
chencryption -usb disable

If you want to restore the configuration with both encryption key providers, follow the
instructions that are described in 12.6, “Configuring more providers” on page 743.

Note: If you lose access to all encryption key providers that are defined in the system, no
method is available to recover access to the data that is protected by the master access
key.
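
The recovery flow can be summarized with the following hedged CLI sketch, which assumes
that the key server provider was lost and that a USB flash drive with a valid key is still
available (the comment lines are annotations, not CLI input):

IBM_Storwize:ITSO-V7k:superuser>lsencryption                    # confirm the overall encryption state first
IBM_Storwize:ITSO-V7k:superuser>chencryption -usb validate      # verify that a USB drive holds a valid key
IBM_Storwize:ITSO-V7k:superuser>chencryption -keyserver disable # remove the unavailable provider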

12.9 Using encryption


The design for encryption is based on the concept that a system is fully encrypted or not
encrypted. Encryption implementation is intended to encourage solutions that contain only
encrypted volumes or only unencrypted volumes. For example, after encryption is enabled on
the system, all new objects (for example, pools) are by default created as encrypted.

Some unsupported configurations are actively policed in code. For example, no support exists
for creating unencrypted child pools from encrypted parent pools. However, exceptions exist:
򐂰 During the migration of volumes from unencrypted to encrypted volumes, a system might
report both encrypted and unencrypted volumes.
򐂰 It is possible to create unencrypted arrays from CLI by manually overriding the default
encryption setting.

Note: Encryption support for distributed redundant array of independent disks (DRAID) is
available in IBM Spectrum Virtualize V7.7 and later.

You must decide whether to encrypt or not encrypt an object when it is created. You cannot
change this setting later. To change the encryption state of stored data, you must migrate
from an encrypted object (for example, a pool) to an unencrypted one, or vice versa.
Volume migration is the only way to encrypt any volumes that were created before enabling
encryption on the system.

12.9.1 Encrypted pools
For more information about how to open the Create Pool window, see Chapter 5, “Storage
pools” on page 221. After encryption is enabled, any new pool is created by default as
encrypted, as shown in Figure 12-69.

Figure 12-69 Create Pool window basic

You can click Create to create an encrypted pool. All storage that is added to this pool is
encrypted.
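
An encrypted pool can also be created from the CLI. The following is a minimal sketch that
uses the -encrypt parameter of the mkmdiskgrp command; the pool name and extent size are
arbitrary examples, and the response line is illustrative:

IBM_Storwize:ITSO-V7k:superuser>mkmdiskgrp -name EncPool0 -ext 1024 -encrypt yes
MDisk Group, id [3], successfully created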

You can customize the Pools view in the management GUI to show the pool encryption
status. Select Pools → Pools, and then select Actions → Customize Columns →
Encryption, as shown in Figure 12-70.

Figure 12-70 Pool encryption state

If you create an unencrypted pool but you add only encrypted arrays or self-encrypting
MDisks to the pool, the pool is reported as encrypted because all extents in the pool are
encrypted. The pool reverts to the unencrypted state if you add an unencrypted array or
MDisk.



How to add encrypted storage to encrypted pools is described next. You can mix and match
storage encryption types in a pool. Figure 12-71 shows an example of an encrypted pool that
contains storage by using different encryption methods.

Figure 12-71 Mixing and matching encryption in a pool

12.9.2 Encrypted child pools


For more information about how to open the Create Child Pool window, see Chapter 5,
“Storage pools” on page 221. If the parent pool is encrypted, every child pool also must be
encrypted. The GUI enforces this requirement by automatically selecting Encryption
Enabled in the Create Child Pool window and preventing changes to this setting, as shown in
Figure 12-72.

Figure 12-72 Creating a child pool of an encrypted parent pool

However, if you want to create encrypted child pools from an unencrypted storage pool that
contains a mix of internal arrays and external MDisks, the following restrictions apply:
򐂰 The parent pool must not contain any unencrypted internal arrays. If any unencrypted
internal arrays are in the unencrypted pool, when you try to create a child pool and select
the option to set it as encrypted, it is created as unencrypted.
򐂰 All IBM FlashSystem Control Enclosures in the system must support software encryption
and have the encryption license activated.

Note: An encrypted child pool that is created from an unencrypted parent storage pool
reports as unencrypted if the parent pool contains any unencrypted internal arrays.
Remove these arrays to ensure that the child pool is fully encrypted.
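
For reference, an encrypted child pool can also be created from the CLI with a sketch like
the following one, assuming the -parentmdiskgrp, -size, and -unit parameters of the
mkmdiskgrp command; all names and the size are arbitrary examples:

IBM_Storwize:ITSO-V7k:superuser>mkmdiskgrp -parentmdiskgrp Pool0 -name ChildPool0 -size 500 -unit gb -encrypt yes
MDisk Group, id [4], successfully created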

If you modify the Pools view, you can see the encryption status of child pools, as shown in
Figure 12-73. The example shows an encrypted child pool with a non-encrypted parent pool.

Figure 12-73 Child pool encryption state

12.9.3 Encrypted arrays


For more information about how to add internal storage to a pool, see Chapter 5, “Storage
pools” on page 221. After encryption is enabled, all newly built arrays are hardware encrypted
by default. In this case, the GUI does not allow you to create an unencrypted array. To create
an unencrypted array, the command-line interface (CLI) must be used. Example 12-1 shows
how to create an unencrypted array by using the CLI.

Example 12-1 Creating an unencrypted array by using the CLI with IBM FlashSystem
IBM_SAN:ITSO-V7k:superuser>svctask mkarray -drive 6:4 -level raid1 -sparegoal 0
-strip 256 -encrypt no Pool2
MDisk, id [2], successfully created
IBM_SAN:ITSO-V7k:superuser>

Note: It is not possible to add unencrypted arrays to an encrypted pool.



You can customize the MDisks by Pools view to show the array encryption status. Select
Pools → MDisk by Pools, and then select Actions → Customize Columns → Encryption.
You also can right-click the table header to customize columns and select Encryption, as
shown in Figure 12-74.

Figure 12-74 Array encryption state

You can also check the encryption state of an array by reviewing its drives by selecting
Pools → Internal Storage. The internal drives that are associated with an encrypted array
are assigned an encrypted property that you can view, as shown in Figure 12-75.

Figure 12-75 Drive encryption state
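
The array encryption state can also be queried from the CLI by filtering the detailed lsarray
view, as in the following sketch; the encrypt attribute name is assumed from the detailed
array view, and the MDisk name is an example:

IBM_Storwize:ITSO-V7k:superuser>lsarray mdisk0 |grep encrypt
encrypt yes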

12.9.4 Encrypted MDisks


For more information about how to add external storage to a pool, see Chapter 5, “Storage
pools” on page 221. Each MDisk that belongs to external storage that is added to an
encrypted pool or child pool is automatically encrypted by using the pool or child pool key
unless the MDisk is detected or declared as self-encrypting.

The user interface gives no method to see which extents contain encrypted data and which
do not. However, if a volume is created in a correctly configured encrypted pool, all data that
is written to this volume is encrypted.

You can use the MDisk by Pools view to view the object encryption state by selecting
Pools → MDisk by Pools. Figure 12-76 shows an example in which a self-encrypting MDisk
is in an unencrypted pool.

Figure 12-76 MDisk encryption state

When working with MDisk encryption, take extra care when configuring the MDisks and
pools.

If the MDisk was earlier used for storage of unencrypted data, the extents can contain stale
unencrypted data. This issue occurs because file deletion marks disk space only as free. The
data is not removed from the storage. Therefore, if the MDisk is not self-encrypting and was a
part of an unencrypted pool, and later was moved to an encrypted pool, it contains stale data
from its previous state.

Another mistake that can occur is misconfiguring an external MDisk as self-encrypting while
in reality it is not self-encrypting. In that case, the data that is written to this MDisk is not
encrypted by IBM FlashSystem because IBM FlashSystem expects that the storage system
hosting the MDisk encrypts the data. Concurrently, the MDisk does not encrypt the data
because it is not self-encrypting, so the system ends up with unencrypted data on an extent in
an encrypted storage pool.

However, all data that is written to any MDisk that is part of a correctly configured encrypted
storage pool is encrypted.

Self-encrypting MDisks
When adding external storage to a pool, be exceptionally diligent when declaring the MDisk
as self-encrypting. Correctly declaring an MDisk as self-encrypting avoids wasting resources,
such as CPU time. However, when used improperly, it might lead to unencrypted data at rest.



To declare an MDisk as self-encrypting, select Externally encrypted when adding external
storage in the Assign Storage view, as shown in Figure 12-77.

Figure 12-77 Declaring an MDisk as externally encrypted

IBM Spectrum Virtualize products can detect that an MDisk is self-encrypting by using the
SCSI Inquiry page C2. MDisks that are provided by other IBM Spectrum Virtualize products
report this page correctly. For these MDisks, the Externally encrypted option that is shown in
Figure 12-77 is not selected. However, when added, they are still considered as
self-encrypting.

Note: You can override the external encryption setting of a detected MDisk as
self-encrypting and configure it as unencrypted by running the chmdisk -encrypt no
command. However, do so only if you plan to decrypt the data on the back end or if the
back end uses inadequate data encryption.

To check whether an MDisk was detected or declared as self-encrypting, select Pools →
MDisk by Pools and verify the information in the Encryption column, as shown in
Figure 12-78.

Figure 12-78 MDisk self-encryption state

The value that is shown in the Encryption column shows the property of objects in respective
rows, which means that in the configuration that is shown in Figure 12-78, Pool1 is encrypted,
so every volume that is created from this pool is encrypted. However, that pool is formed by
two MDisks, out of which one is self-encrypting and one is not. Therefore, a value of No next to
mdisk2 does not imply that the encryption of Pool1 is in any way compromised. It indicates
that encryption of the data that is placed on mdisk2 is done only by using software encryption.
Data that is placed on mdisk1 is encrypted by the back-end storage that is providing these
MDisks.

Note: You can change the self-encrypting attribute of an MDisk that is unmanaged or a
member of an unencrypted pool. However, you cannot change the self-encrypting attribute
of an MDisk after it is added to an encrypted pool.
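
A quick CLI check of this attribute is sketched here, assuming the encrypt field of the
detailed lsmdisk view; mdisk2 is an example name that matches the preceding discussion:

IBM_Storwize:ITSO-V7k:superuser>lsmdisk mdisk2 |grep encrypt
encrypt no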

12.9.5 Encrypted volumes


For more information about how to create and manage volumes, see Chapter 6, “Volumes” on
page 277. The encryption status of a volume depends on the pool encryption status. Volumes
that are created in an encrypted pool are automatically encrypted.

You can modify the Volumes view to show whether the volume is encrypted. Select
Volumes → Volumes, and then select Actions → Customize Columns → Encryption to
customize the view to show the volume’s encryption status, as shown in Figure 12-79.

Figure 12-79 Volume view customization



A volume is reported as encrypted only if all the volume copies are encrypted, as shown in
Figure 12-80.

Figure 12-80 Volume encryption status depending on volume copies encryption

When creating volumes, make sure to select encrypted pools to create encrypted volumes, as
shown in Figure 12-81.

Figure 12-81 Creating an encrypted volume by selecting an encrypted pool

You cannot change an existing unencrypted volume to an encrypted version of itself
dynamically. However, this conversion is possible by using two migration options:
򐂰 Migrate a volume to an encrypted pool or child pool.
򐂰 Mirror a volume to an encrypted pool or child pool and delete the unencrypted copy.

For more information about these methods, see Chapter 6, “Volumes” on page 277.
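
As a hedged CLI sketch, the two options look as follows; vol0 and EncPool0 are example
names, the unencrypted copy is assumed to be copy 0, and the comment lines are
annotations, not CLI input:

# Option 1: migrate the volume to an encrypted pool
IBM_Storwize:ITSO-V7k:superuser>migratevdisk -mdiskgrp EncPool0 -vdisk vol0
# Option 2: add a mirrored copy in an encrypted pool, wait for synchronization,
# and then delete the unencrypted copy
IBM_Storwize:ITSO-V7k:superuser>addvdiskcopy -mdiskgrp EncPool0 vol0
IBM_Storwize:ITSO-V7k:superuser>rmvdiskcopy -copy 0 vol0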

12.9.6 Restrictions
The following restrictions apply to encryption:
򐂰 Image mode volumes cannot be in encrypted pools.
򐂰 You cannot add external non-self-encrypting MDisks to encrypted pools unless all control
enclosures in the system support encryption.

12.10 Rekeying an encryption-enabled system


Changing the master access key is a security requirement. Rekeying is the process of
replacing the current master access key with a newly generated one. The rekey operation
works whether or not encrypted objects exist. The rekeying operation requires access to a valid
copy of the original master access key on an encryption key provider that you plan to rekey.
Use the rekey operation according to the schedule that is defined in your organization’s
security policy and whenever you suspect that the key might be compromised.

If you have both USB and key servers that are enabled, rekeying is done separately for each
of the providers.

Important: Before you create a master access key, ensure that all nodes are online and
that the current master access key is accessible.

No method is available to directly change data encryption keys. If you must change the data
encryption key that is used to encrypt data, the only available method is to migrate that data
to a new encrypted object (for example, an encrypted child pool). Because the data
encryption keys are defined per encrypted object, such migration forces a change of the key
that is used to encrypt that data.



12.10.1 Rekeying by using a key server
Ensure that all the configured key servers can be reached by the system and that service IP
addresses are configured on all your nodes.

To rekey the master access key that is kept on the key server provider, complete the following
steps:
1. Select Settings → Security → Encryption. Ensure that Encryption Keys shows that all
configured IBM SKLM servers are reported as Accessible, as shown in Figure 12-82.
Click Key Servers to expand the section.

Figure 12-82 Locate Key Servers section in the Encryption window

2. Click Rekey, as shown in Figure 12-83.

Figure 12-83 Starting the rekey on the IBM SKLM key server

3. Click Yes in the next window to confirm the rekey operation, as shown in Figure 12-84.

Figure 12-84 Confirming the key server rekey operation

Note: The rekey operation is performed on only the primary key server that is
configured in the system. If more key servers are configured apart from the primary key server,
they do not hold the updated encryption key until they obtain it from the primary key
server. To restore encryption key provider redundancy after a rekey operation, replicate
the encryption key from the primary key server to the secondary key servers.

You receive a message confirming that the rekey operation was successful, as shown in
Figure 12-85.

Figure 12-85 Successful key server rekey operation



12.10.2 Rekeying by using USB flash drives
During the rekey process, new keys are generated and copied to the USB flash drives. These
keys are then used instead of the current keys. The rekey operation fails if at least one of the
USB flash drives does not contain the current key. To rekey the system, you need at least
three USB flash drives to store the master access key copies.

After the rekey operation is complete, update all other copies of the encryption key, including
copies that are stored on other media. Take the same precautions to securely store all copies
of the new encryption key as when you enabled encryption for the first time.

To rekey the master access key on USB flash drives, complete the following steps:
1. Select Settings → Security → Encryption. Click USB Flash Drives to expand the
section, as shown in Figure 12-86.

Figure 12-86 Finding the USB Flash Drive section in the Encryption view

2. Verify that all USB drives that are plugged into the system are detected and show as
Validated, as shown in Figure 12-87. Click Rekey. You need at least three USB flash
drives, with at least one reported as Validated to process a rekey.

Figure 12-87 Starting a rekey on a USB flash drives provider



3. If the system detects a validated USB flash drive and at least three available USB flash
drives, new encryption keys are automatically copied onto the USB flash drives, as shown in
Figure 12-88. Click Commit to finalize the rekey operation.

Figure 12-88 Writing new keys to USB flash drives

4. You receive a message confirming that the rekey operation was successful, as shown in
Figure 12-89. Click Close.

Figure 12-89 Successful rekey operation by using USB flash drives
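
The USB rekey can also be driven from the CLI. The following sketch assumes the newkey
mode of the chencryption command, where the prepare step generates and writes the new
keys to the USB flash drives and the commit step activates them:

IBM_Storwize:ITSO-V7k:superuser>chencryption -usb newkey -key prepare
IBM_Storwize:ITSO-V7k:superuser>chencryption -usb newkey -key commit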

12.11 Disabling encryption
You are prevented from disabling encryption if any encrypted objects are defined apart from
self-encrypting MDisks. You can disable encryption in the same way whether you use USB
flash drives, a key server, or both providers.

To disable encryption, complete the following steps:


1. Select Settings → Security → Encryption, and click Enabled. If no encrypted objects
exist, a menu is displayed. Click Disabled to disable encryption on the system.
Figure 12-90 shows an example for a system with both encryption key providers
configured.

Figure 12-90 Disabling encryption on a system with both providers

2. You receive a message confirming that encryption was disabled. Figure 12-91 shows the
message when a key server is used.

Figure 12-91 Encryption disabled

Chapter 13. Reliability, availability, and serviceability, monitoring and logging, and troubleshooting

This chapter introduces useful and common procedures to maintain the system. It includes
the following topics:
򐂰 13.1, “Reliability, availability, and serviceability” on page 770
򐂰 13.2, “Shutting down the IBM FlashSystem” on page 781
򐂰 13.3, “Removing or adding a node from or to the system” on page 782
򐂰 13.4, “Configuration backup” on page 785
򐂰 13.5, “Software update” on page 789
򐂰 13.6, “Health checker feature” on page 804
򐂰 13.7, “Troubleshooting and fix procedures” on page 805
򐂰 13.8, “Monitoring” on page 812
򐂰 13.9, “Audit log” on page 829
򐂰 13.10, “Collecting support information by using the GUI, CLI, and USB” on page 831
򐂰 13.11, “Service Assistant Tool” on page 838
򐂰 13.12, “IBM Storage Insights monitoring” on page 841



13.1 Reliability, availability, and serviceability
Reliability, availability, and serviceability (RAS) are important concepts in the design of the
IBM Spectrum Virtualize system. Hardware features, software features, design
considerations, and operational guidelines all contribute to make the IBM Storwize and IBM
FlashSystem systems reliable.

Fault tolerance and high levels of availability are achieved by using the following methods:
򐂰 The distributed redundant array of independent disks (DRAID) capabilities of the
underlying disks.
򐂰 IBM FlashSystem node clustering that uses a Compass architecture.
򐂰 Auto-restart of hung nodes.
򐂰 Integrated battery backup units (BBUs) to provide memory protection if a site power failure
occurs.
򐂰 Host system failover capabilities by using N_Port ID Virtualization (NPIV).
򐂰 Deploying advanced multi-site configurations, such as IBM HyperSwap and stretched
clusters.

High levels of serviceability are available through the following methods:


򐂰 Cluster error logging
򐂰 Asynchronous error notification
򐂰 Automatic dump capabilities to capture software-detected issues
򐂰 Concurrent diagnostic procedures
򐂰 Directed maintenance procedures (DMPs) with guided online replacement processes
򐂰 Concurrent log analysis and memory dump data recovery tools
򐂰 Concurrent maintenance of IBM FlashSystem components
򐂰 Concurrent upgrade of IBM Spectrum Virtualize Software and firmware
򐂰 Concurrent addition or deletion of node canisters in the clustered system
򐂰 Automatic software version leveling when replacing a node
򐂰 Detailed status and error conditions that are displayed by light-emitting diode (LED)
indicators
򐂰 Error and event notification through Simple Network Management Protocol (SNMP),
syslog, and email
򐂰 Optional Remote Support Assistant

The heart of the IBM FlashSystem system is a pair of node canisters. These two canisters
share the read and write data workload from the attached hosts and to the disk arrays. This
section examines the RAS features of the systems, monitoring, and troubleshooting.

13.1.1 Node canisters
Two node canisters are contained in the control enclosure that work as a clustered system
that runs the IBM Spectrum Virtualize Software. As shown in Figure 13-1, the top node
canister is inverted above the bottom one. The control enclosure also contains two power
supply units (PSUs) that operate independently of each other. The PSUs are visible from the
back of the control enclosure.

Figure 13-1 LEDs on each node canister

The connections of a single node canister (bottom) are shown in Figure 13-2.

Figure 13-2 Typical node canister connections

Host interface cards
Each canister (apart from the V5100) has three host interface card (HIC) slots. Depending on
the system, there might already be a 4-port serial-attached SCSI (SAS) card that is installed
in each node, leaving two HIC slots that can be populated with a range of cards, as shown in
Table 13-1. Nodes in the same I/O group must have the same HIC configuration.
same HIC configuration.

Table 13-1 Supported card configurations for V7xxx / FlashSystem 9xxx systems

Supported number of cards | Ports | Protocol | Slot positions | Note
0 - 3 | 4 | 16 Gb Fibre Channel (FC) | 1, 2, 3 |
0 - 3 | 4 | 32 Gb FC | 1, 2, 3 | Works with a 16 Gb switch.
0 - 3 | 2 | 25 Gb Ethernet (internet Wide Area Remote Direct Memory Access (RDMA) Protocol (iWARP)) | 1, 2, 3 |
0 - 3 | 2 | 25 Gb Ethernet (RDMA over Converged Ethernet (RoCE)) | 1, 2, 3 |
0 - 1 | 2 | 12 Gb SAS Expansion | 1 | Four-port card, but only two are active. Expansion only, no SAS host attachment.

Note: The systems have onboard compression cards. There are no compression assist
cards like in previous models.

For V5100, there are only two card slots, and the following card configurations are supported
(Table 13-2).

Table 13-2 Supported card configurations for V5100 / FlashSystem 5xxx systems

Supported number of cards | Ports | Protocol | Slot positions | Note
0 - 1 | 4 | 16 Gb FC | 2 |
0 - 1 | 4 | 32 Gb FC | 2 | Works with a 16 Gb switch.
0 - 1 | 2 | 25 Gb Ethernet (iWARP) | 2 |
0 - 1 | 2 | 25 Gb Ethernet (RoCE) | 2 |
0 - 1 | 2 | 12 Gb SAS Expansion | 1 | Four-port card, but only two are active. Expansion only, no SAS host attachment.

Note: For V5100 and FlashSystem 5100 systems, PCIe slot 1 has a blanking plate, so this
slot cannot be used, and slots 2 and 3 become slots 1 and 2. The fabric attach card goes
only in slot 2 (far right when the canister is in the lower canister) so that you can better
leverage the direct connection of slot 2 to the CPU. Slot 1 (middle position) is connected
through the PCIe switch and accepts only the optional (and slower) SAS card.

The FC card is required to add other control enclosures to the system (0 - 2). Using an FC
card, you can connect the FlashSystem 9xxx or V7xxx control enclosure to up to three more
systems (for a maximum of eight nodes). For the V5100 system, you can connect only one
extra control enclosure (for a maximum of four nodes). For FC configurations, the meaning of
the port LEDs is explained in Table 13-3.

Table 13-3 Fibre Channel link LED statuses

Port LED | Color | Meaning
Link status | Green | Link is up, connection established.
Speed | Amber | Link is not up or speed fault.

USB ports
Two active USB connectors are available in the horizontal position to the right of the node.
They have no numbers, and no indicators are associated with them. These ports can be used
for initial cluster setup, encryption key backup, and node status or log collection.

Ethernet and LED status


Five onboard Ethernet ports are on each canister. However, not all ports are equal, and their
functions are listed in Table 13-4. Figure 13-2 on page 771 shows the location of the
technician port on a node canister.

Table 13-4 Ethernet ports and their functions

Onboard Ethernet port | Speed | Function
1 | 10 GbE | Management Internet Protocol (IP), Service IP, and Host I/O (internet Small Computer Systems Interface (iSCSI) only)
2 | 10 GbE | Secondary Management IP, and Host I/O (iSCSI only)
3 | 10 GbE | Host I/O (iSCSI only)
4 | 10 GbE | Host I/O (iSCSI only)
T | 1 GbE | Technician Port: DHCP / domain name server (DNS) for direct attach service management

Each port has two LEDs, and their status values are listed in Table 13-5. However, the T port
is strictly dedicated to technician actions (initial and emergency configuration by local support
personnel).

Table 13-5 Ethernet LED statuses

LED | Color | Meaning
Link state | Green | It is on when there is an Ethernet link.
Activity | Amber | It is flashing when there is activity on the link.

Serial-attached SCSI ports


When a 4-port SAS interface card is installed, it is possible to connect the 2U and 4U
expansion enclosures. However, only ports 1 and 3 are used for SAS connections, with the
SAS chain from port 1 installed below the lower node canister, and the SAS chain from port 3
installed above the upper node canister, as shown in Table 13-6. The SAS card must be
installed in PCIe slot 3 of each node canister. There are two LEDs for each SAS port with
statuses, as shown in Table 13-6.

Table 13-6 SAS LED statuses

LED | Meaning
Green | Link is connected and up.
Orange | Fault on the SAS link (disconnected, wrong speed, and errors).

Node canister status LEDs


There are three LEDs in a row at the left of the canister that indicate the status and the
functions of the node (see Table 13-7).

Table 13-7 Node canister LEDs

Position | Color | Name | State | Meaning
Left | Green | Power | On | The node is started and active. It might not be safe to remove the canister. If the fault LED is off, the node is an active member of a cluster or candidate. If the fault LED is also on, the node is in the service state or an error is preventing the software from starting.
 | | | Flashing (2 Hz) | Canister is started and in standby mode.
 | | | Flashing (4 Hz) | Node is running a power-on self-test (POST).
 | | | Off | No power to the canister or it is running on battery.
Middle | Green | Status | On | The node is a member of a cluster.
 | | | Flashing (2 Hz) | The node is a candidate for or in a service state.
 | | | Flashing (4 Hz) | The node is performing a fire hose dump. Never unplug the canister during this time.
 | | | Off | No power to the canister or the canister is in standby mode.
Right | Amber | Fault | On | The canister is in a service state or in error, for example, a POST error that is preventing the software from starting.
 | | | Flashing (2 Hz) | Canister is being identified.
 | | | Off | Node is either in the candidate or active state.

Battery LEDs
Immediately to the right of the canister LEDs, with a short gap between them, are the Battery
LEDs, which provide the status of the battery (see Table 13-8).

Table 13-8 Battery LEDs

Position | Color | Name | State | Meaning
Left | Green | Status | On | Indicates that the battery is fully charged and has sufficient charge to complete two fire hose dumps.
 | | | Flashing (2 Hz) | Indicates that the battery has sufficient charge to complete a single fire hose dump.
 | | | Flashing (4 Hz) | Indicates that the battery is charging and has insufficient charge to complete a single fire hose dump.
 | | | Off | Indicates that the battery is not available for use (for example, it is missing or contains a fault).
Right | Amber | Fault | On | Indicates that a battery has a fault or a condition occurred. The node enters the service state.
 | | | Off | Indicates that there are no known battery faults or conditions. An exception is when a battery has insufficient charge to complete a single fire hose dump. Refer to the Status LED.

13.1.2 Expansion canisters
As Figure 13-3 shows, two 12 gigabits per second (Gbps) SAS ports are side by side on the
canister of every enclosure. They are numbered 1 on the left and 2 on the right. Like the
controller canisters, expansion canisters are also installed in the enclosure side by side in a
vertical position.


Figure 13-3 Expansion canister status LEDs

The SAS status LED indicators have the same meaning as the LED indicators of the SAS
ports in the control enclosure (Table 13-6 on page 774).

Table 13-9 lists the LED status values of the expansion canister.

Table 13-9 Expansion canister LED statuses

Position | Color | Name | State | Meaning
Left | Green | Power | On | The canister is powered on.
 | | | Off | No power is available to the canister.
Middle | Green | Status | On | The canister is operating normally.
 | | | Flashing | There is an error with the vital product data (VPD).
Right | Amber | Fault | On | There is an error that is logged against the canister or the system is not running.
 | | | Flashing | Canister is being identified.
 | | | Off | No fault; the canister is operating normally.

13.1.3 Dense Drawer Enclosures LED


As Figure 13-4 on page 777 shows, two 12 Gbps SAS ports are side by side on the canister
of every enclosure. They are numbered 1 on the right and 2 on the left. Each Dense Drawer
has two canisters side by side, although they are inverted when compared to the 2U
enclosures.

Figure 13-4 Dense Drawer LEDs

The SAS status LED indicators have the same meaning as the LED indicators of the SAS
ports that are mentioned in the previous section (see Table 13-9 on page 776).

Table 13-10 shows the LED status values of the expansion canister.

Table 13-10 Expansion canister LED statuses

Position | Color | Name | State | Meaning
Right | Green | Power | On | The canister is powered on.
 | | | Off | No power is available to the canister.
Middle | Green | Status | On | The canister is operating normally.
 | | | Flashing | There is an error with the VPD.
Left | Amber | Fault | On | There is an error that is logged against the canister or the system is not running (OSES).
 | | | Flashing | Canister is being identified.
 | | | Off | No fault; the canister is operating normally.

13.1.4 Enclosure SAS cabling


Expansion enclosures are attached to control enclosures through 12 Gbps SAS cables. The
IBM FlashSystem control enclosure attaches up to 20 expansion enclosures or up to eight
Dense Drawer enclosures.

A strand starts with an SAS initiator chip inside an IBM FlashSystem node canister and
progresses through SAS expanders, which connect disk drives. Each canister contains an
expander. Each drive has two ports, each connected to a different expander and strand. This
configuration ensures that both nodes in the input/output (I/O) group have direct access to
each drive, and that no single point of failure (SPOF) exists.

Figure 13-5 shows how the SAS connectivity works inside the node and expansion canisters.

Figure 13-5 Concept of SAS chaining

Note: The last expansion enclosure in a chain must not have cables in port 2 of canister 1
or port 2 of canister 2. So, if you add another two enclosures to the setup that is shown in
Figure 13-5, you connect a cable to port 2 of the existing enclosure canisters and port 1 of
the new enclosure canisters.

A chain consists of a set of enclosures that are correctly interconnected (Figure 13-6).
Chain 1 of an I/O group is connected to SAS port 1 of both node canisters. Chain 2 is
connected to SAS port 3. This configuration means that chain 2 includes the SAS expander
and drives of the control enclosure.

Figure 13-6 SAS cabling with numbered enclosures

At system initialization, when devices are added to or removed from strands, the system
performs a discovery process to update the state of the drive and enclosure objects.

13.1.5 IBM FlashCore Module drives
There are two main considerations for RAS when talking about the new IBM FlashCore
Module (FCM) drives:
򐂰 Never reseat an FCM because when it is reseated, it performs an automatic reformat,
which means that all data on that drive can be lost.
򐂰 When removing an array, FCM drives might show as offline for some time due to
formatting. They automatically come back online after the task finishes.

13.1.6 Power
All enclosures accommodate two PSUs for normal operation. A single PSU can supply the
entire enclosure for redundancy. For this reason, it is highly advised to supply AC power to
each PSU from different Power Distribution Units (PDUs).

There is a power switch on the power supply and indicator LEDs. The switch must be on for
the PSU to be operational. If the power switch is turned off, the PSU stops providing power to
the system.

For control enclosure PSUs, the battery that is integrated in the node canister continues to
supply power to the node. It supports a power outage of up to 5 seconds before initiating safety
procedures. A fully charged battery can perform two fire hose dumps. A fire hose dump is a
process where a node stores cache and system data to an internal drive in the event of a
power failure.

Figure 13-7 shows two PSUs that are present in the control and expansion enclosure. The
controller PSU has one LED that can be green or amber, depending on the status of the PSU.
If the LED is off, that means there is no AC power to the entire enclosure.


Figure 13-7 Controller and expansion enclosure LED status indicator

Figure 13-8 presents the rear overview of the enclosure canister with a PSU. The enclosure is
powered on by the direct attachment of a power cable.


Figure 13-8 Expansion enclosure power supply unit

Power supplies in both control and expansion enclosures are hot-swappable and replaceable
without needing to shut down a node or cluster. If the power is interrupted in one node for less
than 5 seconds, the canister does not perform a fire hose dump and continues operation from
the battery. This feature is useful, for example, during maintenance of UPS systems in the
data center or when replugging the power to a different power source or PDU. A fully
charged battery can perform two fire hose dumps.

13.2 Shutting down the IBM FlashSystem


You can safely shut down the system by using the GUI or command-line interface (CLI).

Important: Never shut down your system by powering off the PSUs, removing both PSUs,
or removing both power cables from a running system. These actions can lead to
inconsistency or loss of the data that is staged in the cache.

Before shutting down the IBM FlashSystem server, stop all hosts that allocated volumes from
the device. This step can be skipped for hosts that have volumes that are also provisioned
with mirroring (host-based mirror) from different storage devices. However, doing so incurs
errors that are related to lost storage paths and disks on the host error log.

You can shut down only one node canister, or you can shut down the entire cluster. When you
shut down only one node canister, the partner node in the I/O group keeps all activities
running. When you shut down the entire cluster, you must power it on locally to start the system.
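
From the CLI, the shutdown is typically performed with the stopsystem command. The
following is a sketch; verify the exact syntax on your code level before use, and note that the
comment lines are annotations, not CLI input:

# Shut down the entire clustered system
IBM_Storwize:ITSO-V7k:superuser>stopsystem
# Shut down a single node canister only
IBM_Storwize:ITSO-V7k:superuser>stopsystem -node 1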

13.2.1 Shutting down and powering on a complete infrastructure


When you shut down or power on the entire infrastructure (storage, servers, and
applications), follow a particular sequence for both the shutdown and the power-on actions.
Next, we describe an example sequence of a shutdown, and then a power-on of an
infrastructure that includes a IBM FlashSystem system.

Shutting down
To shut down the infrastructure, complete the following steps:
1. Shut down your servers and all applications.
2. Shut down your IBM FlashSystem systems:
a. Shut down the IBM FlashSystem by using the GUI or CLI.
b. Power off both power supply switches of the control enclosure.
c. Power off both power supply switches of all the expansion enclosures.
3. Shut down your storage area network (SAN) switches.

Powering on
To power on your infrastructure, complete the following steps:
1. Power on your SAN switches and wait until the start completes.
2. Power on your storage systems by completing the following steps:
a. Power on both power supplies of all the expansion enclosures.
b. Power on both power supplies of the control enclosure.
c. When the storage systems are up, power on your servers and start your applications.

13.3 Removing or adding a node from or to the system
There are situations where IBM Support might ask you to remove a node from the system
briefly. One typical use case is when a node becomes stuck during a code upgrade. You can
remove the node from the cluster briefly to commit the upgrade and complete (or cancel) the
procedure depending on how many nodes have upgraded so far. This procedure should be
done only under the direction of IBM Support.

The easiest way to do this task is to run svcinfo lsnode to display all nodes with their IDs
and statuses, as shown in Figure 13-9. Make sure that each I/O group has two nodes online,
or that if you remove a node, one node remains in the I/O group to continue serving I/O.

Figure 13-9 The lsnode output

In this example, we remove node 1 from the cluster. Run the svctask rmnode 1 command, as
shown in Example 13-1.

Example 13-1 The rmnode command


ssh superuser@[Link]
Password:
IBM_Storwize:fab2shared1-cl:superuser>svctask rmnode 1

A node can also be removed by using the GUI. Complete the following steps:
1. Select Monitoring → System, and then select the relevant control enclosure that the
node you want to remove is on, which opens the Enclosure Details window. Select the
node and either right-click it and click Remove, or use the menu in the Components
Details to remove it, as shown in Figure 13-10, which opens a confirmation window.

Figure 13-10 Removing a node by using the GUI

After you remove the node, if you rerun svcinfo lsnode, you see that it disappeared from
the cluster, as shown in Figure 13-11. The Service Assistant Tool (SAT) and GUI also
reflect that there are now only three nodes in the cluster.

Figure 13-11 The lsnode output after removing a node

Note: By default, the cache is flushed before the node is deleted to prevent data loss if
a failure occurs on the other node in the I/O group. This flush incurs a delay after you
remove a node to when it comes back up as candidate status.

2. After a brief period, check the SAT, which shows that the node that you removed is in the
service or candidate status, as shown in Figure 13-12.

Figure 13-12 Service Assistant Tool post-node removal

3. Select the radio button for the node that is in service and then select Exit Service State
from the Actions menu. Click GO, and a confirmation window opens, as shown in
Figure 13-13.

Figure 13-13 Exit service state

4. A confirmation window opens and shows that the node exited the service state. Click OK,
or close the window and click Refresh under the list of the nodes. The new node now
shows the Candidate status, as shown in Figure 13-14.

Figure 13-14 Node in the Candidate status

5. The node should automatically readd itself to the system. If not, look at the numbers in the
Panel column and go back to your CLI session. Run the addnode command and specify
the panel ID to add the node back into the cluster, as shown in Example 13-2.

Example 13-2 The addnode command


ssh superuser@[Link]
Password:
IBM_Storwize:fab2shared1-cl:superuser>svctask addnode -panelname 7822PBR-1
-iogrp io_grp0

6. Run svcinfo lsnode again or check the SAT to ensure that the node was added back, as
shown in Figure 13-15 on page 785.

Figure 13-15 The node is back in the cluster

Note: If you want to remove an entire control enclosure from the cluster to reduce the size
of the cluster or to decommission it, you can do this task by using the GUI. Go to the
Enclosure Overview window, as shown in Figure 13-10 on page 782, but instead of
selecting a node, select Enclosure Actions and then Remove. A confirmation window
opens. This action runs the rmnode command against both nodes in the control enclosure.
For more information about removing an enclosure, see IBM Knowledge Center and
search for “Removing a control enclosure and its expansion enclosures”.

13.4 Configuration backup


You can download and save the configuration backup file by using the IBM FlashSystem GUI
or CLI. On an ad hoc basis, manually perform this procedure because it can save the file
directly to your workstation. The CLI option requires you to log in to the system and download
the dumped files by using Secure Copy Protocol (SCP). The CLI option is a best practice for
an automated backup of the configuration.

Important: Generally, perform a daily backup of the IBM FlashSystem configuration


backup file, for which the best approach is to automate this task. Always perform another
backup before any critical maintenance task, such as an update of the IBM Spectrum
Virtualize Software version.

The backup file is updated by the cluster every day. Saving it after any changes to your
system configuration is important. It contains configuration data of arrays, pools, volumes,
and other items. The backup does not contain any data from the volumes.

To successfully perform the configuration backup, the following prerequisites must be met:
򐂰 All nodes are online.
򐂰 No independent operations that change the configuration can be running in parallel.
򐂰 No object name can begin with an underscore.

Important: Ad hoc backup of configuration can be done only from the CLI by using the
svcconfig backup command. Then, the output of the command can be downloaded by
using SCP or GUI.

13.4.1 Backing up by using the CLI


You can use the CLI to trigger configuration backups manually or by a regular automated
process. The svcconfig backup command generates a new backup file. Triggering a backup
by using the GUI is not possible. However, you might choose to save the automated 1 AM
cron backup if you have not made any configuration changes.

Example 13-3 shows how to use the svcconfig backup command to generate an ad hoc
backup of the current configuration.

Example 13-3 Saving the configuration by using the CLI


IBM_Storwize:ITSO-V7k:superuser>svcconfig backup
..................................................................................
..............................................
CMMVC6155I SVCCONFIG processing completed successfully
IBM_Storwize:ITSO-V7k:superuser>

The svcconfig backup command generates three files that provide information about the
backup process and cluster configuration. These files are dumped into the /tmp directory on
the configuration node. Run the lsdumps command to list them (see Example 13-4).

Example 13-4 Listing the backup files by using the CLI


IBM_Storwize:ITSO-V7k:superuser>lsdumps |grep backup
187 svc.config.backup.bak_7822DFF-1
208 svc.config.backup.xml_7822DFF-1
209 svc.config.backup.sh_7822DFF-1
210 svc.config.backup.log_7822DFF-1
IBM_Storwize:ITSO-V7k:superuser>

Note: The svc.config.backup.bak file is a previous copy of the configuration, and not part
of the current backup.

Table 13-11 lists the three files that are created by the backup process.

Table 13-11 Files that are created by the backup process

File name | Description
svc.config.backup.xml | This file contains your cluster configuration data.
svc.config.backup.sh | This file contains the names of the commands that ran to create the backup of the cluster.
svc.config.backup.log | This file contains details about the backup, including any error information that might have been reported.

Save the current backup to a secure and safe location. The files can be downloaded by
running scp (UNIX) or pscp (Microsoft Windows), as shown in Example 13-5. Replace the IP
address with the cluster IP address of your system and specify a local folder on your
workstation. In this example, we save to C:\V7000Backup.

Example 13-5 Saving the config backup files to your workstation


C:\putty>pscp -unsafe
superuser@[Link]:/dumps/svc.config.backup.* c:\V7000backup
Using keyboard-interactive authentication.
Password:
svc.config.backup.bak_782 | 133 kB | 33.5 kB/s | ETA: 00:00:00 | 100%
svc.config.backup.log_782 | 16 kB | 16.8 kB/s | ETA: 00:00:00 | 100%
svc.config.backup.sh_7822 | 5 kB | 5.9 kB/s | ETA: 00:00:00 | 100%
svc.config.backup.xml_782 | 105 kB | 52.8 kB/s | ETA: 00:00:00 | 100%

C:\putty>

C:\>dir V7000backup
Volume in drive C has no label.
Volume Serial Number is 0608-239A

Directory of C:\V7000backup

24.10.2018 10:57 <DIR> .
24.10.2018 10:57 <DIR> ..
24.10.2018 10:57 137.107 svc.config.backup.bak_7822DFF-1
24.10.2018 10:57 17.196 svc.config.backup.log_7822DFF-1
24.10.2018 10:57 6.018 svc.config.backup.sh_7822DFF-1
24.10.2018 10:58 108.208 svc.config.backup.xml_7822DFF-1
4 File(s) 268.529 bytes
2 Dir(s) ... bytes free

C:\>

Using the -unsafe option enables you to use the wildcard for downloading all the
svc.config.backup files with a single command.

Tip: If you encounter the Fatal: Received unexpected end-of-file from server error,
when running the pscp command, consider upgrading your version of PuTTY.

13.4.2 Saving the backup by using the GUI


Although it is not possible to generate an ad hoc backup, you can save the backup files by
using the GUI. To do so, complete the following steps:
1. Select Settings → Support → Support Package.
2. Click the Manual Upload Instructions twistie to expand it.

3. Click Download Existing Package, as shown in Figure 13-16.

Figure 13-16 Download Existing Package

The Support Package selection window opens, as shown in Figure 13-17.

Figure 13-17 Support Package Selection

4. Filter the view by clicking in the Filter box, entering backup, and pressing Enter, as shown
in Figure 13-18.

Figure 13-18 Filtering specific files for download

Note: You must select the configuration node in the upper left drop-down menu
because the backup files are stored there.

5. Select all the files to include in the compressed file, and then click Download. Depending
on your browser preferences, you might be prompted about where to save the file,
otherwise it downloads to your defined download directory.

13.5 Software update


This section describes the operations to update your system software to Version 8.3.1.

The format for the software update package name ends in four positive integers that are
separated by dots. For example, a software update package might have the following name:
IBM_2076_INSTALL_8.3.1.0

13.5.1 Precautions before the update


This section describes the precautions that you should take before you attempt an update.

Important: Before you attempt any code update, read and understand the concurrent
compatibility and code cross-reference matrix for your system. For more information, see
Concurrent Compatibility and Code Cross Reference for Storwize V7000, and click Latest
system code.

During the update, each node in the IBM FlashSystem clustered system is automatically shut
down and restarted by the update process. Because each node in an I/O group provides an
alternative path to volumes, use the Subsystem Device Driver (SDD) to make sure that all I/O
paths between all hosts and SANs work.

If you do not perform this check, certain hosts might lose connectivity to their volumes and
experience I/O errors when the IBM FlashSystem node that provides that access is shut
down during the update process. You can check the I/O paths by running datapath query
SDD commands.
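
For example, on a host that uses the SDD family of drivers (SDDDSM or SDDPCM), the
following commands display adapter and path states; this is a sketch, and the exact output
varies by platform and driver level:

# Show the state of each host bus adapter.
datapath query adapter

# Show every path to each multipath device.
datapath query device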

13.5.2 IBM FlashSystem update test utility


The software update test utility, svcupgradetest, is an IBM FlashSystem Software utility that
checks for known issues that can cause problems during a software update. For more
information about the utility, see Software Upgrade Test Utility. Download the utility from the
same page where you download the firmware, which ensures that you receive the current
version of the utility.

The software update test utility can be downloaded in advance of the update process.
Alternately, it can be downloaded and run directly during the software update, as guided by
the update wizard.

You can run the utility multiple times on the same system to perform a readiness check in
preparation for a software update. Run the utility a final time immediately before you apply
the software update to pick up any new releases of the utility since it was originally
downloaded.

The installation and use of this utility is nondisruptive, and it does not require restart of any
IBM FlashSystem nodes. Therefore, there is no interruption to host I/O. The utility is installed
only on the current configuration node.

System administrators must continue to check whether the version of code that they plan to
install is the latest version. For the most current information, see Concurrent Compatibility
and Code Cross Reference for Storwize V7000.

This utility is intended to supplement rather than duplicate the tests that are performed by the
IBM Spectrum Virtualize update procedure (for example, checking for unfixed errors in the
error log).

A concurrent software update of all components is supported through the standard Ethernet
management interfaces. However, most of the configuration tasks are restricted during the
update process.

13.5.3 Updating your IBM FlashSystem to Version 8.3.1
To update the IBM Spectrum Virtualize Software to Version 8.3.1, complete the following
steps:
1. Open a supported web browser and go to your cluster IP address. A login window opens
(see Figure 13-19).

Figure 13-19 Pre-Version 8.1 GUI login window

2. Log in by using the superuser credentials. The management home window opens. Hover
the cursor over Settings and click System (see Figure 13-20).

Figure 13-20 Settings menu

3. In the System menu, click Update System. The Update System window opens (see
Figure 13-21).

Figure 13-21 Update System window

4. From this window, you can select to run the update test utility and continue with the code
update or run the test utility. For this example, we click Test and Update.

My Notifications: Use the My Notifications tool to receive notifications of new and
updated support information to better maintain your system environment, especially in
an environment where a direct internet connection is not possible.

See My Notifications (an IBM account is required) to add your system to the
notifications list to be advised of support information and to download the current code
to your workstation for later upload.

5. Because you downloaded both files from Concurrent Compatibility and Code Cross
Reference for Storwize V7000, you can click each folder, browse to the location where you
saved the files, and upload them to the system. If the files are correct, the GUI detects and
updates the target code level, as shown in Figure 13-22.

Figure 13-22 Upload option for both the test utility and update package

6. Select the type of update you want to perform, as shown in Figure 13-23. Select
Automatic update unless IBM Support suggests Service Assistant Manual update. The
manual update might be preferable in cases where misbehaving host multipathing is
known to cause loss of access. Click Finish to begin the update package upload process.

Figure 13-23 The update type selection

When updating from Version 8.1 or later, another window opens, in which you can choose
a fully automated update, one that pauses when half the nodes complete the update, or
one that pauses after each node update, as shown in Figure 13-24. The pause option
requires that you click Resume to continue the update after each pause. Click Finish.

Figure 13-24 New Version 8.1 update pause options

7. After the update packages upload, the update test utility looks for any known issues that
might affect a concurrent update of your system. Click Read more (see Figure 13-25 on
page 795).

Figure 13-25 Issues that are detected by the update test utility

The results pane opens and shows you what issues were detected (see Figure 13-26). In
our example, the system identified an error that one or more drives in the system are
running microcode with a known issue and a warning that email notification (Call Home) is
not enabled. Although this issue is not a recommended condition, it does not prevent the
system update from running. Therefore, we click Close and proceed with the update.
However, you might need to contact IBM Support to help resolve more serious issues
before continuing.

Figure 13-26 Description of the warning from the test utility

8. Click Resume in the Update System window and the update proceeds, as shown in
Figure 13-27.

Figure 13-27 Resuming the update

Note: Because the utility detects issues, another warning appears to ensure that you
investigated them and are certain that you want to proceed. When you are ready to
proceed, click Yes.

9. The system begins updating the IBM Spectrum Virtualize Software by taking one node
offline and installing the new code. This process takes approximately 20 minutes. After the
node returns from the update, it is listed as complete, as shown in Figure 13-28.

Figure 13-28 Update process paused for host path recovery

10. After a 30-minute pause, which ensures that multipathing recovered on all attached
hosts, a node failover occurs and you temporarily lose connection to the GUI. A warning
window opens and prompts you to refresh the current session, as shown in Figure 13-29
on page 797.

Tip: If you are updating from Version 7.8 or later, the 30-minute wait period can be
adjusted by starting the update with the applysoftware -delay <mins> CLI command
instead of using the GUI.
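
As a sketch, the equivalent CLI sequence looks like the following example; the package file
name is illustrative, and the lsupdate command reports the state of the update as it
progresses:

# Start the update with a 45-minute pause for host path recovery.
applysoftware -file IBM_FlashSystem9100_INSTALL_8.3.1.0 -delay 45

# Monitor the progress and state of the update.
lsupdate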

Figure 13-29 Node failover

You now see the new Version 8.3.1 GUI and the status of the second node updating, as
shown in Figure 13-30.

Figure 13-30 New GUI after node failover

After the second node completes, the update is committed to the system, as shown in
Figure 13-31.

Figure 13-31 Updating the system level

The update process completes when all nodes and the system unit are committed. The
final status indicates the new level of code that is installed in the system.

Note: If your nodes have more than 64 GB of memory before updating to Version 8.1+,
each node posts an 841 error after the update completes.

Because Version 8.1+ allocates memory differently, the memory must be accepted by
running the fix procedure for the event or svctask chnodehw <id> for each node.
However, run this command only for one node at a time. For more information, see IBM
Knowledge Center.

13.5.4 Updating the IBM FlashSystem drive code


After completing the software update as described in 13.5, “Software update” on page 789,
the firmware of the disk drives in the system also must be updated. The update test utility
identified that drives with down-level firmware are in the system, as shown in Figure 13-32.
However, this fact does not stop the system software update from being performed.

Figure 13-32 Upgrade Test Utility drive firmware warning

To update the drive code, complete the following steps:
1. Download the latest drive firmware package from IBM Fix Central. Make sure that you
select the correct product.
2. On the GUI, select Pools → Internal Storage and select All Internal Storage.
3. Click Actions and select Upgrade all, as shown in Figure 13-33.

Figure 13-33 Upgrading all internal drives

Tip: The Upgrade all action displays only if you did not select any individual drive in the
list. If you clicked an individual drive in the list, the action gives you individual drive
actions; selecting Upgrade upgrades only that drive’s firmware. You can clear an
individual drive by pressing Ctrl and clicking the drive again.

4. The Upgrade All Drives window opens, as shown in Figure 13-34. Click the small folder at
the right side of the Upgrade package drop-down menu, go to where you saved the file
that you downloaded in step 1 on page 799, and click Upgrade to upload the firmware
package and begin upgrading any drives that are running earlier firmware. Do not select
the option to install the firmware even if the drive is running a newer level; do that only
under guidance from IBM Support.

Figure 13-34 Selecting the drive upgrade package

Note: The system upgrades member drives one at a time. Although the firmware
upgrades are concurrent, they do cause a brief reset to the drive. However, the
redundant array of independent disks (RAID) technology enables the system to
continue after this brief interruption. After a drive completes its update, a calculated wait
time, which can vary with system load, passes before the next drive updates to ensure
that the previous drive is stable.

5. With the drive upgrades running, you can view the progress by clicking the Tasks icon and
clicking View for the Drive Upgrade running task, as shown in Figure 13-35.

Figure 13-35 Selecting Drive Upgrade running task view

The Drive upgrade running task window opens. The drives that are pending upgrade and
an estimated time of completion are visible, as shown in Figure 13-36.

Figure 13-36 Drive upgrade progress for a single drive upgrade

6. You can view each drive’s firmware level in the Pools Internal Storage All Internal window
by enabling the drive firmware option after right-clicking in the column header line, as
shown in Figure 13-37.

Figure 13-37 Viewing drive firmware levels

With the Firmware Level column enabled, you can see the current level of each drive, as
shown in Figure 13-38.

Figure 13-38 Drive firmware display
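
The same checks and updates can be driven from the CLI. The following sketch shows the
relevant commands; the drive ID and the firmware package name are illustrative:

# Display the details of drive 5, including its firmware_level field.
lsdrive 5

# Apply new firmware to drive 5 from an uploaded package.
applydrivesoftware -file IBM2076_DRIVE_20200115 -type firmware -drive 5

# Monitor the progress of running drive updates.
lsdriveupgradeprogress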

13.5.5 Manually updating the system


This example assumes that you have an 8-node cluster, as shown in Table 13-12.

Table 13-12 The iogrp setup


iogrp (0) iogrp (1) iogrp (2) iogrp (3)

Node 1 (config node) Node 3 Node 5 Node 7

Node 2 Node 4 Node 6 Node 8

After uploading the update test utility and software update package to the cluster by using
PSCP and running the utility test, complete the following steps:
1. Start by removing node 2, which is the partner node of the configuration node in iogrp 0,
by using the cluster GUI or CLI.
2. Log in to the service GUI to verify that the removed node is in the candidate status.
3. Select the candidate node and click Update Manually from the left pane.
4. Browse and find the code that you downloaded and saved to your PC.
5. Upload the code and click Update.
When the update completes, a message caption indicating software update completion
displays. The node then restarts, and appears again in the service GUI (after
approximately 20 - 25 minutes) in the candidate status.
6. Select the node and verify that it is updated to the new code.
7. Add the node back by using the cluster GUI or the CLI.
8. Select node 3 from iogrp 1.
9. Repeat steps 1 - 7 to remove node 3, update it manually, verify the code, and add it back
to the cluster.
10. Move on to node 5 in iogrp 2.
11. Repeat steps 1 - 7 to remove node 5, update it manually, verify the code, and add it back
to the cluster.
12. Move on to node 7 in iogrp 3.
13. Repeat steps 1 - 7 to remove node 7, update it manually, verify the code, and add it back
to the cluster.

Note: The update is 50% complete. You now have one node from each iogrp that is
updated with the new code manually. Always leave the configuration node for last
during a manual software update.

14. Select node 4 from iogrp 1.
15. Repeat steps 1 - 7 to remove node 4, update it manually, verify the code, and add it back
to the cluster.
16. Select node 6 from iogrp 2.
17. Repeat steps 1 - 7 to remove node 6, update it manually, verify the code, and add it back
to the cluster.
18. Select node 8 in iogrp 3.
19. Repeat steps 1 - 7 to remove node 8, update it manually, verify the code, and add it back
to the cluster.
20. Select and remove node 1, which is the configuration node in iogrp 0.

Note: A partner node becomes the configuration node because the original
configuration node is removed from the cluster, which keeps the cluster manageable.

The removed configuration node becomes a candidate, and you do not have to apply the
code update manually. Add the node back to the cluster. It automatically updates itself and
then adds itself back to the cluster with the new code.
21. After all the nodes are updated, confirm the update to complete the process. The
confirmation restarts each node in order, which takes about 30 minutes to complete.

The update is complete.
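
For reference, a minimal CLI sketch of the remove step for one canister follows; the ID is
illustrative. The canister is then updated from the service assistant and added back by using
the cluster GUI, as described in steps 2 - 7:

# Identify the configuration node and its partner canister.
lsnodecanister

# Remove node 2 from the cluster; it restarts into the candidate state.
rmnodecanister 2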

13.6 Health checker feature
The IBM Spectrum Control health checker feature runs in IBM Cloud. Based on the weekly
Call Home inventory reporting, the health checker proactively creates recommendations,
which are provided at IBM Call Home Web (login required): select Support → My
support → Call Home Web (see Figure 13-39).

Figure 13-39 Call Home Web on ibm.com

For a video guide about how to set up and use IBM Call Home Web, see this YouTube web
page.

Another feature is the Critical Fix Notification function, which enables IBM to warn users that
a critical issue exists in the level of code that they are using. The system notifies users when
they log on to the GUI by using a web browser that is connected to the internet.

Consider the following information about this function:


򐂰 It warns users only about critical fixes, and does not warn them that they are running a
previous version of the software.
򐂰 It works only if the browser also has access to the internet. The system itself does not
need to be connected to the internet.
򐂰 The function cannot be disabled. Each time that it displays a warning, it must be
acknowledged (with the option to not warn the user again for that issue).

The decision about what is a critical fix is subjective and is exercised by the development
team. As a result, clients might still encounter bugs in code that were not deemed critical.
Clients should continue to review information about new code levels to determine whether
they must update, even without a critical fix notification.

Important: Inventory notification must be enabled and operational for these features to
work. It is a best practice to enable Call Home and Inventory reporting on your
IBM Spectrum Virtualize clusters.

13.7 Troubleshooting and fix procedures
The management GUI of IBM FlashSystem is a browser-based GUI for configuring and
managing all aspects of your system. It provides extensive facilities to help troubleshoot and
correct problems. This section explains how to effectively use its features to avoid service
disruption of your system.

Figure 13-40 shows the Monitoring menu icon for System information, viewing Events, or
seeing real-time Performance statistics.

Figure 13-40 Monitoring options

Use the management GUI to manage and service your system. Select Monitoring → Events
to list events that should be addressed and maintenance procedures that walk you through
the process of correcting problems. Information in the Events window can be filtered four
ways:
򐂰 Recommended Actions
Shows only the alerts that require attention. Alerts are listed in priority order and should be
resolved sequentially by using the available fix procedures. For each problem that is
selected, you can perform the following tasks:
– Run a fix procedure.
– View the properties.

򐂰 Unfixed Alerts
Displays only the alerts that are not fixed. For each entry that is selected, you can perform
the following tasks:
– Run a fix procedure.
– Mark an event as fixed.
– Filter the entries to show them by specific minutes, hours, or dates.
– Reset the date filter.
– View the properties.
򐂰 Unfixed Messages and Alerts
Displays only the alerts and messages that are not fixed. For each entry that is selected,
you can perform the following tasks:
– Run a fix procedure.
– Mark an event as fixed.
– Filter the entries to show them by specific minutes, hours, or dates.
– Reset the date filter.
– View the properties.
򐂰 Show All
Displays all event types whether they are fixed or unfixed. For each entry that is selected,
you can perform the following tasks:
– Run a fix procedure.
– Mark an event as fixed.
– Filter the entries to show them by specific minutes, hours, or dates.
– Reset the date filter.
– View the properties.

Some events require a certain number of occurrences in 25 hours before they are displayed
as unfixed. If they do not reach this threshold in 25 hours, they are flagged as expired.
Monitoring events, which fall below this coalesce threshold, are transient.

Important: The management GUI is the primary tool that is used to operate and service
your system. Real-time monitoring should be established by using SNMP traps, email
notifications, or syslog messaging in an automatic manner.

13.7.1 Managing the event log
Regularly check the status of the system by using the management GUI. If you suspect a
problem, first use the management GUI to diagnose and resolve the problem.

Use the views that are available in the management GUI to verify the status of the system, the
hardware devices, the physical storage, and the available volumes by completing the
following steps:
1. Select Monitoring → Events to see all problems that exist on the system (see
Figure 13-41).

Figure 13-41 Messages in the event log

2. Select Recommended Actions from the drop-down list to display the most important
events to be resolved (see Figure 13-42). The Recommended Actions tab shows the
highest priority maintenance procedure that must be run. Use the troubleshooting wizard
so that the system can determine the proper order of maintenance procedures.

Figure 13-42 Recommended Actions

In this example, there is a login that is excluded (service error code 1231). At any time and
from any GUI window, you can directly go to this menu by clicking the Status Alerts icon
at the top of the GUI (see Figure 13-43).

Figure 13-43 Status alerts
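
The event log can also be examined from the CLI. As a sketch, the following commands list
unfixed alerts ordered by severity, and then display the full details of one entry; the sequence
number is illustrative:

# List unfixed alerts, most severe first.
lseventlog -alert yes -fixed no -order severity

# Show the full details of the event with sequence number 120.
lseventlog 120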

13.7.2 Running a fix procedure


If an error code exists for the alert, run the fix procedure to help you resolve the problem.
These fix procedures analyze the system and provide more information about the problem.
They suggest actions to take and walk you through the actions that automatically manage the
system where necessary while ensuring availability. Finally, they verify that the problem is
resolved.

If an error is reported, always use the fix procedures from the management GUI to resolve the
problem for both software configuration problems and hardware failures. The fix procedures
analyze the system to ensure that the required changes do not cause volumes to become
inaccessible to the hosts. The fix procedures automatically perform configuration changes
that are required to return the system to its optimum state.

The fix procedure displays information that is relevant to the problem, and it provides various
options to correct the problem. Where possible, the fix procedure runs the commands that are
required to reconfigure the system.

Note: After Version 7.4, you are no longer required to run the fix procedure for a failed
drive. Hot plugging a replacement drive automatically triggers the validation processes.

The fix procedure also checks that any other existing problems do not result in volume access
being lost. For example, if a PSU in a node enclosure must be replaced, the fix procedure
checks and warns you if the integrated battery in the other PSU is not sufficiently charged to
protect the system.

Hint: Always use Run Fix, which resolves the most serious issues first. Often, other alerts
are corrected automatically because they were the result of a more serious issue.

The following example demonstrates how to clear the error that is related to an excluded FC
login, likely due to errors along the link:
1. From the GUI menu on the left, select Monitoring → Events, and list only the
recommended actions by using the Actions menu (see Figure 13-44). Click Run Fix.

Figure 13-44 Initiate the Run Fix procedure from the management GUI

2. The next window shows you which login has the problem (see Figure 13-45). Click Next.

Figure 13-45 Determination of the problem

3. In the next window (see Figure 13-46), you see an error count, which is a history of login
excluded errors against the port. You can see whether this port has a history of being a
problematic link. In this case, it is a single error, and you checked connectivity, which
seems to be working correctly, so you can select Mark this event as fixed or Mark all the
1231 events as fixed.... Then, click Next.

Figure 13-46 Marking the error as fixed

4. The next window (see Figure 13-47) shows a confirmation that the event was marked as
fixed. Click Close to exit.

Figure 13-47 Confirmation that the event was fixed

5. The event is marked as fixed, and you can safely finish the fix procedure. Click Close and
the event is removed from the list of events (see Figure 13-48).

Figure 13-48 Pane showing that there are no more excluded local logins

Resolving alerts in a timely manner
To minimize any impact to your host systems, always perform the recommended actions as
quickly as possible after a problem is reported. Your system is resilient to most single
hardware failures. However, if it operates for any period with a hardware failure, the possibility
increases that a second hardware failure can result in some volume data that is unavailable. If
several unfixed alerts exist, fixing any one alert might become more difficult because of the
effects of the others.

13.7.3 Event log details


Multiple views of the events and recommended actions are available (see Figure 13-49).
When you click the column icon at the right end of the table heading, a menu for the column
choices opens.

Figure 13-49 Grid options of the event log

Select or remove columns as needed. You can also extend or shrink the width of columns to
fit your window resolution and size. This method is relevant for most panes in the
management GUI of an IBM FlashSystem system.

Every field of the event log is available as a column in the event log grid. Several fields are
useful when you work with IBM Support, such as the sequence number, the event count, and
the fixed state. The preferred method in this case is to use the Show All filter, with events
sorted by time stamp. Clicking Restore Default View sets the grid back to the defaults.

You might want to see more details about each critical event. Some details are not shown in
the main grid. To access the properties and sense data of a specific event, double-click the
specific event anywhere in its row.

The properties window opens (see Figure 13-50) with all the relevant sense data. This data
includes the first and last time of an event occurrence, number of times the event occurred,
worldwide port name (WWPN), worldwide node name (WWNN), enabled or disabled
automatic fix, and other information.

Figure 13-50 Event sense data and properties

For more information about troubleshooting options, search for “Troubleshooting” at IBM
Knowledge Center.

13.8 Monitoring
An important step is to correct any issues that are reported by your system as soon as
possible. Configure your system to send automatic notifications to a standard Call Home
server or to the new Cloud Call Home server when a new event is reported. To avoid having
to monitor the management GUI for new events, select the type of event for which you want to
be notified. For example, you can restrict notifications to only events that require action. The
following event notification mechanisms are available:
򐂰 Call Home
An event notification can be sent to one or more email addresses. This mechanism notifies
individuals of problems. Individuals can receive notifications wherever they have email
access, including mobile devices.
򐂰 Cloud Call Home
Cloud services for Call Home is the optimal transmission method for error data because it
ensures that notifications are delivered directly to the IBM Support Center.

򐂰 SNMP
An SNMP traps report can be sent to a data center management system, such as
IBM Systems Director, which consolidates SNMP reports from multiple systems. With this
mechanism, you can monitor your data center from a single workstation.
򐂰 Syslog
A syslog report can be sent to a data center management system that consolidates syslog
reports from multiple systems. With this option, you can monitor your data center from a
single location.

If your system is within warranty or if you have a hardware maintenance agreement, configure
your IBM FlashSystem system to send email events directly to IBM if an issue that requires
hardware replacement is detected. This mechanism is known as Call Home. When this event
is received, IBM automatically opens a problem report and, if appropriate, contacts you to
help resolve the reported problem.

Important: If you set up Call Home to IBM, ensure that the contact details that you
configure are correct and kept updated. Personnel changes can cause delays in IBM
making contact.

Cloud Call Home is designed to work with the new IBM service teams, improves
connectivity, and ultimately should improve customer support.

Note: If the customer does not want to open the firewall, Cloud Call Home does not work
and can be disabled. Email-based Call Home is used instead.

13.8.1 Email notifications and the Call Home function


The Call Home function of IBM FlashSystem sends email notifications to a specific
IBM Support Center address. Therefore, configuring it is similar to configuring email
notifications to a specific person or system owner.

The following procedure summarizes how to configure email notifications and emphasizes
what is specific to Call Home:
1. Prepare your contact information that you want to use for the email notification and verify
the accuracy of the data. From the GUI menu, select Settings → Support (see
Figure 13-51).

Figure 13-51 Support menu

2. Select Call Home, and then click Enable Notifications (see Figure 13-52). For more
information, see IBM Knowledge Center.
For the correct functioning of email notifications, ask your network administrator whether
Simple Mail Transfer Protocol (SMTP) is enabled on the management network and not, for
example, blocked by firewalls. Test the accessibility of the SMTP server by using the
telnet command (port 25 for a non-secured connection, port 465 for Secure Sockets
Layer (SSL)-encrypted communication) from any server in the same network segment.

Figure 13-52 Configuration of Call Home notifications

Figure 13-53 shows the option to enable Cloud Call Home.

Figure 13-53 Cloud Home service

3. After clicking Next on the Welcome window, enter the information about the location of the
system (see Figure 13-54) and contact information of the system administrator (see
Figure 13-55) to be contacted by IBM Support. Always keep this information current.

Figure 13-54 Location of the device

Figure 13-55 shows the contact information of the owner.

Figure 13-55 Contact information

In the next window, you can enable Inventory Reporting and Configuration Reporting, as
shown in Figure 13-56 on page 817.

Figure 13-56 Inventory Reporting and Configuration Reporting

4. Configure the SMTP server according to the instructions that are shown in Figure 13-57.
When the correct SMTP server is provided, you can test the connectivity by clicking Ping
to verify that it can be contacted. Then, click Apply and Next.

Figure 13-57 Configuring email servers and inventory reporting

5. A summary window opens. Verify all the information, and then click Finish. You are
returned to the Email Settings window, where you can verify the email address of
IBM Support (callhome0@de.ibm.com) and optionally add local users who also need to
receive notifications (see Figure 13-58).

Figure 13-58 Setting email recipients and alert types

The default support email address callhome0@de.ibm.com is predefined by the system to
receive Error Events and Inventory. Do not change these settings or disable the 7-day
reporting interval at the bottom of the Settings window.
You can modify or add local users by using Edit mode after the initial configuration is
saved.
The Inventory Reporting function is enabled by default for Call Home. Rather than
reporting a problem, an email is sent to IBM that describes your system hardware and
critical configuration information. Object names and other information, such as IP
addresses, are not included. By default, the inventory email is sent weekly, which allows
an IBM Cloud service to analyze the inventory email and inform you whether the hardware
or software that you are using requires an update because of any known issue, as
described in 13.6, “Health checker feature” on page 804.
Figure 13-58 shows the configured email notification and Call Home settings.
6. After completing the configuration wizard, test the email function. To do so, enter Edit
mode, as shown in Figure 13-59. In the same window, you can define more email
recipients or alter any contact and location details as needed.

Figure 13-59 Entering Edit mode

7. In Edit mode, you can change any of the previously configured settings. After you are
finished editing these parameters, adding more recipients, or testing the connection, save
the configuration so that the changes take effect (see Figure 13-60).

Figure 13-60 Saving modified configuration

Note: The Test button appears for new email users after first saving and then editing again.

Disabling and enabling notifications


At any time, you can temporarily or permanently disable email notifications, as shown in
Figure 13-61. This is best practice when performing activities in your environment that might
generate errors on IBM Spectrum Virtualize, such as SAN reconfiguration or replacement
activities. After the planned activities, remember to re-enable the email notification function.
The same results can be achieved by running the svctask stopmail and svctask startmail
commands.

Figure 13-61 Disabling or enabling email notifications
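
For reference, a minimal CLI sketch of the same email notification setup follows; the SMTP
server address and the contact details are illustrative:

# Define the SMTP server that relays the notification email.
mkemailserver -ip 192.168.1.25 -port 25

# Set the contact details that IBM Support uses when responding to a Call Home.
chemail -reply admin@example.com -contact "John Doe" -primary 5550100 -location "DC1"

# Add the IBM Call Home recipient for error events and inventory reports.
mkemailuser -address callhome0@de.ibm.com -error on -inventory on

# Temporarily stop, and later restart, email notifications.
svctask stopmail
svctask startmail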

13.8.2 Remote Support Assistance
Introduced with Version 8.1, Remote Support Assistance enables IBM Support to remotely
connect to an IBM FlashSystem system through a secure tunnel to perform analysis, log
collection, and software updates. The tunnel can be enabled ad hoc by the client or as a
permanent connection.

Note: Clients who purchased Enterprise Class Support (ECS) are entitled to IBM Support
by using Remote Support Assistance to quickly connect and diagnose problems. However,
IBM Support might choose to use this feature on non-ECS systems at their discretion.
Therefore, configure and test the connection on all systems.

If you are enabling Remote Support Assistance, ensure that the following prerequisites are
met:
򐂰 Cloud Call Home or a valid email server are configured (Cloud Call Home is used as the
primary method to transfer the token when you initiate a session, with email as backup).
򐂰 A valid service IP address is configured on each node in the system.
򐂰 If your IBM FlashSystem system is behind a firewall or if you want to route traffic from
multiple storage systems to the same place, you must configure a Remote Support Proxy
server. Before you configure Remote Support Assistance, the proxy server must be
installed and configured separately. During the setup for Support Assistance, specify the
IP address and the port number for the proxy server on the Remote Support Centers
window.
򐂰 If you do not have firewall restrictions and the nodes are directly connected to the internet,
request your network administrator to allow connections to 129.33.206.139 and
204.146.30.139 on Port 22.
򐂰 Uploading support packages and downloading software have direct connections to the
internet. A DNS server must be defined on your system for both of these functions to work.
򐂰 To ensure that support packages are uploaded correctly, configure the firewall to allow
connections to the following IP addresses on port 443: 129.42.56.189, 129.42.54.189,
and 129.42.60.189.
򐂰 To ensure that software is downloaded correctly, configure the firewall to allow connections
to the following IP addresses on port 22: 170.225.15.105, 170.225.15.104,
170.225.15.107, 129.35.224.105, 129.35.224.104, and 129.35.224.107.

Figure 13-62 shows a window that opens in the GUI after updating to Version 8.3.1. It
prompts you to configure your system for Remote Support. You can select to not enable it,
open a tunnel when needed, or open a permanent tunnel to IBM.

Figure 13-62 Prompt to configure Remote Support Assistance

You can choose to configure remote support now, learn more about the feature, or close the
window by clicking X. Figure 13-63 shows where to find Setup Remote Support Assistance
if you closed the window.

Figure 13-63 Support Assistance menu

Choosing to set up Support Assistance opens a wizard to guide you through the following
configuration process:
1. Figure 13-64 shows the first wizard window. To keep remote assistance disabled, select I
want support personnel to work on-site only. To enable remote assistance, select I
want support personnel to access my system both on-site and remotely. Click Next.

Figure 13-64 Support wizard enable or disable

Note: Selecting I want support personnel to work on-site only does not entitle you
to expect IBM Support to attend onsite for all issues. Most maintenance contracts are
for customer-replaceable unit (CRU) support, where IBM diagnoses your problem and
sends a replacement component for you to install, if required.

If you prefer to have IBM perform replacement tasks for you, contact your local sales
person to investigate an upgrade to your current maintenance contract.

2. Figure 13-65 lists the IBM Support Center IP addresses and Secure Shell (SSH) port that
must be open in your firewall. You can also define a Remote Support Assistance Proxy if
you have multiple systems in the data center, which allows for a firewall configuration
being required only for the proxy server rather than every storage system. In this example,
we do not have a proxy server and leave the field blank. Click Next.

Figure 13-65 Support wizard proxy setup

3. The next window prompts you about whether you want to open a tunnel to IBM
permanently, which allows IBM to connect to your system At Any Time, or On
Permission Only, as shown in Figure 13-66 on page 823. On Permission Only requires
a storage administrator to log on to the GUI and enable the tunnel when required. Click
Finish.

Figure 13-66 Support wizard access choice

4. After completing the remote support setup, you can view the status of any remote
connection, start a session, test the connection to IBM, and reconfigure the setup. As
shown in Figure 13-67, we successfully tested the connection. Click Start New Session
to open a tunnel through which IBM Support can connect.

Figure 13-67 Support Status and session management

5. A window prompts you for how long you want the tunnel to remain open if no activity
occurs by setting a timeout value. As shown in Figure 13-68, the connection is established
and waits for IBM Support to connect.

Figure 13-68 Remote Assistance tunnel connection

13.8.3 SNMP configuration


SNMP is a standard protocol for managing networks and exchanging messages. The system
can send SNMP messages that notify personnel about an event. You can use an SNMP
manager to view the SNMP messages that are sent by the system.

You can configure an SNMP server to receive various informational, error, or warning
notifications by entering the following information (see Figure 13-69 on page 825):
򐂰 IP Address
The address for the SNMP server.
򐂰 Server Port
The remote port number for the SNMP server. The remote port number must be a value of
1 - 65535, where the default is port 162 for SNMP.
򐂰 Community
The SNMP community is the name of the group to which devices and management
stations that run SNMP belong. Typically, the default of public is used.
򐂰 Event Notifications:
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that require prompt action.

Important: Browse to Recommended Actions to run the fix procedures on these


notifications.

– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine any corrective
action such as a space efficient volume running out of space.

824 Implementing IBM FlashSystem 9200, 9100, 7200, and 5100 Systems with IBM Spectrum Virtualize V8.3.1
Important: Go to Recommended Actions to run the fix procedures on these
notifications.

– Select Info if you want the user to receive messages about expected events. No action
is required for these events.

Figure 13-69 SNMP configuration

To add an SNMP server, select Actions → Add and complete the Add SNMP Server window,
as shown in Figure 13-70. To remove an SNMP server, click the line with the server that you
want to remove, and select Actions → Remove.

Figure 13-70 Add SNMP Server

Note: The following properties are optional:
򐂰 Engine ID
Indicates the unique identifier (UID) in hexadecimal that identifies the SNMP server.
򐂰 Security Name
Indicates which security controls are configured for the SNMP server. Supported
security controls are none, authentication, or authentication and privacy.
򐂰 Authentication Protocol
Indicates the authentication protocol that is used to verify the system to the SNMP
server.
򐂰 Privacy Protocol
Indicates the encryption protocol that is used to encrypt data between the system and
the SNMP server.
򐂰 Privacy Passphrase
Indicates the user-defined passphrase that is used to verify encryption between the
system and SNMP server.
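
An SNMP server can also be defined from the CLI. The following one-line sketch creates a
trap receiver for error and warning notifications; the address is illustrative:

# Define an SNMP trap receiver for error and warning events.
mksnmpserver -ip 192.168.1.50 -port 162 -community public -error on -warning on -info off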

13.8.4 Syslog notifications


The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The IP network can be IPv4 or IPv6. The system can send syslog
messages that notify personnel about an event.

You can configure a syslog server to receive log messages from various systems and store
them in a central repository by entering the following information (see Figure 13-71 on
page 828):
򐂰 IP Address
The IP address for the syslog server.
򐂰 Facility
The facility determines the format for the syslog messages. The facility can be used to
determine the source of the message.
򐂰 Message Format
The message format depends on the facility. The system can transmit syslog messages in
the following formats:
– The concise message format provides standard detail about the event.
– The expanded format provides more details about the event.
򐂰 Event Notifications
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.

Important: Go to Recommended Actions to run the fix procedures on these


notifications.

– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine whether any
corrective action is necessary.

Important: Go to Recommended Actions to run the fix procedures on these


notifications.

– Select Info if you want the user to receive messages about expected events. No action
is required for these events.

Figure 13-71 Syslog configuration

To remove a syslog server, click the Minus sign (-).

To add another syslog server, click the Plus sign (+).

The syslog messages can be sent in concise message format or expanded message format.
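
A syslog server can likewise be defined from the CLI with the mksyslogserver command; the
address in the following sketch is illustrative:

# Define a syslog server that receives error and warning events.
mksyslogserver -ip 192.168.1.60 -facility 0 -error on -warning on -info off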

Example 13-6 shows a compact format syslog message.

Example 13-6 Compact syslog message example


IBM2076 #NotificationType=Error #ErrorID=077102 #ErrorCode=1091 #Description=Node
Double fan failed #ClusterName=V7000G2_1 #Timestamp=Wed Jul 02 hh:mm:ss 2017 BST
#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=120

Example 13-7 shows an expanded format syslog message.

Example 13-7 Full format syslog message example


IBM2076 #NotificationType=Error #ErrorID=077102 #ErrorCode=1091 #Description=Node
Double fan failed #ClusterName=V7000G2_1 #Timestamp=Wed Jul 02 hh:mm:ss 2017 BST
#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=120 #ObjectID=2
#NodeID=2 #MachineType=2076624#SerialNumber=1234567 #SoftwareVersion=x.x.x.x
(build 13.4.1709291021000)#FRU=fan 31P1847
#AdditionalData(0->63)=00000000460000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000#Additional
Data(64-127)=000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000

13.9 Audit log
The audit log is useful when analyzing past configuration events, especially when trying to
determine, for example, how a volume ended up being shared by two hosts, or why the
volume was overwritten. The audit log is also included in the svc_snap support data to aid in
problem determination.

The audit log tracks action commands that are issued through an SSH session, management
GUI, or Remote Support Assistance. It provides the following entries:
򐂰 Identity of the user who ran the action command.
򐂰 Name of the actionable command.
򐂰 Timestamp of when the actionable command ran on the configuration node.
򐂰 Parameters that ran with the actionable command.
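
The audit log can also be viewed from the CLI, as in the following sketch:

# Show the five most recent entries in the audit log.
catauditlog -first 5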

The following items are not documented in the audit log:


򐂰 Commands that fail are not logged.
򐂰 A result code of 0 (success) or 1 (success in progress) is not logged.
򐂰 Result object ID of node type (for the addnode command) is not logged.
򐂰 Views are not logged.

Several specific service commands are not included in the audit log:
򐂰 dumpconfig
򐂰 cpdumps
򐂰 cleardumps
򐂰 finderr
򐂰 dumperrlog
򐂰 dumpintervallog
򐂰 svcservicetask dumperrlog
򐂰 svcservicetask finderr

Figure 13-72 shows the access to the audit log. Click Audit Log in the left menu to see which
configuration CLI commands were run on the system.

Figure 13-72 Audit Log from the Access menu

Figure 13-73 shows an example of the audit log after a volume is created and mapped to a
host.

Figure 13-73 Audit log

Changing the view of the Audit Log grid is possible by right-clicking the column headings or
clicking the column selection icon in the upper right corner (see Figure 13-74). The grid
layout and sorting is under the user's control, so you can view everything in the audit log,
sort different columns, and reset the default grid preferences.

Figure 13-74 How to change audit log column headings

13.10 Collecting support information by using the GUI, CLI, and
USB
If you encounter a problem and contact the IBM Support Center, you will most likely be asked
to provide a support package. You can collect and upload this package by using the
Settings → Support menu.

13.10.1 Collecting information by using the GUI


To collect information by using the GUI, complete the following steps:
1. Select Settings → Support and then the Support Package tab (see Figure 13-75).

Figure 13-75 Support Package tab

2. Click Upload Support Package and then Create New Package and Upload.
Assuming that the problem that was encountered was an unexpected node restart that
logged a 2030 error, collect the default logs and the most recent statesave from each node
to capture the most relevant data for support.

Note: When a node unexpectedly restarts, it first dumps its current statesave
information before it restarts to recover from an error condition. This statesave is critical
for IBM Support to analyze what occurred. Collecting a snap type 4 creates statesaves
at the time of the collection, which is not useful for understanding the restart event.

3. The Upload Support Package window provides four options for data collection. If you are
contacted by IBM Support because your system called home or you manually opened a
call with IBM Support, you receive a Problem Management Record (PMR) number. Enter
that PMR number into the PMR field and select the snap type (often referred to as an
option 1, 2, 3, 4 snap) as requested by IBM Support (see Figure 13-76). In our example,
we entered our PMR number, selected snap type 3 (option 3) because this automatically
collects the statesaves that were created at the time that the node restarted, and clicked
Upload.

Tip: To open a service request online, see the Service requests and PMRs.

Figure 13-76 Upload Support Package window

4. The procedure to generate the snap on the system, including the most recent statesave
from each node canister, starts. This process might take a few minutes (see Figure 13-77).

Figure 13-77 Task detail window

The time that it takes to generate the snap and the size of the file that is generated depends
mainly on two things: the snap option that you selected, and the size of your system. An
option 1 snap takes much less time than an option 4 snap because nothing new must be
gathered for an option 1 snap, but an option 4 snap requires the system to collect new
statesaves from each node. In an 8-node cluster, this task can take quite some time, so you
should always collect the snap option that IBM Support recommends.

Table 13-13 shows the approximate file sizes for each SNAP option.

Table 13-13 Types of snaps

Option  Description                                  Approximate size        Approximate size
                                                     (one I/O group,         (four I/O groups,
                                                     30 volumes)             250 volumes)

1       Standard logs                                10 MB                   340 MB

2       Standard logs plus one existing statesave    50 MB                   520 MB

3       Standard logs plus the most recent           90 MB                   790 MB
        statesave from each node

4       Standard logs plus new statesaves            90 MB                   790 MB
13.10.2 Collecting logs by using the CLI
The CLI can be used to collect and upload a support package as requested by IBM Support
by performing the following steps:
1. Log in to the CLI and run the svc_snap command that matches the type of snap that is
requested by IBM Support:
– Standard logs (type 1):
svc_snap upload pmr=ppppp,bbb,ccc gui1
– Standard logs plus one existing statesave (type 2):
svc_snap upload pmr=ppppp,bbb,ccc gui2
– Standard logs plus most recent statesave from each node (type 3):
svc_snap upload pmr=ppppp,bbb,ccc gui3
– Standard logs plus new statesaves:
svc_livedump -nodes all -yes
svc_snap upload pmr=ppppp,bbb,ccc gui3
In this example, we collect the type 3 (option 3) and have it automatically uploaded to the
PMR number that is provided by IBM Support, as shown in Example 13-8.

Example 13-8 The svc_snap command


ssh superuser@<cluster_ip>
Password:
IBM_Storwize:ITSO-V7k:superuser>svc_snap upload pmr=12345,000,866 gui3

If you do not want to automatically upload the snap to IBM, do not specify the upload
pmr=ppppp,bbb,ccc part of the commands. When the snap creation completes, it creates a
file name that uses the following format:
/dumps/snap.<panel_id>.YYMMDD.hhmmss.tgz
It takes a few minutes for the snap file to complete (longer if statesaves are included).
The generated file can then be retrieved from the GUI by selecting Settings →
Support → Manual Upload Instructions twisty → Download Support Package, and
then clicking Download Existing Package, as shown in Figure 13-78 on page 835.

Figure 13-78 Downloaded Existing Package

2. Click in the Filter box and enter snap to see a list of snap files, as shown in Figure 13-79.
Find the exact name of the snap that was generated by running the svc_snap command
that was run earlier. Select that file, and click Download.

Figure 13-79 Filtering on snap to download

3. Save the file to a folder of your choice on your workstation.
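
Alternatively, the generated snap can be copied straight off the system with pscp or scp, as
in the following sketch; replace the address with your cluster IP address and adjust the
target folder:

C:\putty>pscp superuser@<cluster_ip>:/dumps/snap.* c:\snaps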

13.10.3 Collecting logs by using a USB flash drive


As a backup, in the rare case that there is no connectivity to the CLI and GUI, it is possible to
collect a snap from a node by using the USB ports on the rear.

Note: This procedure collects a single snap from the node canister, not a cluster snap. It is
useful for determining the state of the node canister.

When a USB flash drive is plugged into a node canister, the canister code searches for a text
file that is named satask.txt in the root directory. If the code finds the file, it attempts to run
the command that is specified in the file. When the command completes, a file that is called
satask_result.html is written to the root directory of the USB flash drive. If this file does not
exist, it is created. If it exists, the data is inserted at the start of the file. The file contains the
details and results of the command that was run, and the status and configuration
information from the node canister. The status and configuration information matches the
detail that is shown on the service assistant home page windows.

To collect a snap, complete the following steps:


1. Ensure that your USB drive is formatted with a FAT32 file system on its first partition.
2. Create a text (.txt) file in the root directory of the USB flash drive that is called satask.txt
(case-sensitive).
3. In the satask.txt file, write satask snap and save the file (a minimal sketch of the file
contents follows this procedure).
4. Put the USB into one of the USB ports on the rear of the canister and wait for a short time.
The fault LED on the node canister flashes when the USB service action is being
completed. When the fault LED stops flashing, it is safe to remove the USB flash drive.
5. Unplug the USB drive from the system and plug it into your workstation. If the procedure
was successful, the satask.txt file was deleted, and you have a satask_result.html file
and a single snap from the node canister. This snap can be uploaded to the IBM Support
Center, as shown in 13.10.4, “Uploading files to the IBM Support Center” on page 836.

Note: If there was a problem with the procedure, the HTML file is still generated and lists
the reasons why the procedure did not work.
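
As a minimal sketch, the complete contents of the satask.txt file are a single command line:

satask snap

When the node canister processes the file, the resulting snap file and the satask_result.html
file are written back to the root directory of the USB flash drive.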

13.10.4 Uploading files to the IBM Support Center


If you chose not to have the system upload the support package automatically, it can still be
uploaded for analysis by using the Enhanced Customer Data Repository (ECuRep). Any
upload should be associated with a specific PMR. The PMR, which is also known as a service
request, is mandatory when uploading.

To upload the information, complete the following steps:
1. Using a web browser, go to Enhanced Customer Data Repository (ECuRep) (see
Figure 13-80).

Figure 13-80 ECuRep details

2. Complete the required fields:


– The PMR number that is provided by IBM Support for your specific case. This number
uses the format of ppppp,bbb,ccc; for example, 12345,000,866, which uses a comma (,)
as a separator.
– Upload is for: Select Hardware from the drop-down menu.
Although the Email address field is not mandatory, it is a best practice to enter your email
address so that you are automatically notified of a successful or unsuccessful upload.
3. When the form is completed, click Continue to open the input window (see Figure 13-81).

Figure 13-81 ECuRep File upload

4. Select one or more files, click Upload to continue, and follow the directions.

13.11 Service Assistant Tool


The Service Assistant Tool (SAT) is a web-based GUI that is used to service individual node canisters, primarily when
a node has a fault and is in a service state. A node is not an active part of a clustered system
while it is in service state.

Typically, the system is configured with the following IP addresses:


򐂰 One service IP address for each control canister.
򐂰 One cluster management IP address, which is set when the cluster is created.

The SAT is available even when the management GUI is not accessible. The following
information and tasks can be accomplished with the SAT:
򐂰 Status information about the connections and the node canister
򐂰 Basic configuration information, such as configuring IP addresses
򐂰 Service tasks, such as restarting the Common Information Model Object Manager
(CIMOM) and updating the WWNN
򐂰 Details about node error codes
򐂰 Details about the hardware, such as IP addresses and Media Access Control (MAC)
addresses

The SAT GUI is available by using a service assistant IP address that is configured on each
IBM FlashSystem node. It can also be accessed through the cluster IP addresses by
appending /service to the cluster management IP.

It is also possible to access the SAT GUI of the config node if you enter the URL of the service
IP address of the config node into any web browser and click Service Assistant Tool (see
Figure 13-82 on page 839).

Figure 13-82 Service Assistant Tool

If the clustered system is down, the only method of communicating with the node canisters is
through the SAT IP address directly. Each node can have a single service IP address on
Ethernet port 1, which should be configured on all nodes of the cluster.

To open the SAT GUI, enter one of the following URLs into a web browser:
򐂰 http(s)://<cluster IP address of your cluster>/service
򐂰 http(s)://<service IP address of a node>/service
򐂰 http(s)://<service IP address of config node> and click Service Assistant Tool.

To access the SAT, complete the following steps:
1. If you are accessing the SAT by using the cluster IP address with /service appended, the
configuration node canister's SAT GUI login window opens. Enter the Superuser
Password, as shown in Figure 13-83.

Figure 13-83 Service Assistant Tool Login GUI

2. After you are logged in, you see the Service Assistant Home window, as shown in
Figure 13-84. The SAT can view the status and run service actions on other nodes in
addition to the node to which the user is logged in.

Figure 13-84 Service Assistant Tool GUI

3. The current node canister is displayed in the upper left corner of the GUI. As shown in
Figure 13-84 on page 840, this is node2. Select the node that you want in the Change
Node section of the window. You see the details in the upper left change to reflect the
selected node canister.

Note: The SAT GUI provides access to service procedures and shows the status of the
node canisters. These procedures should be carried out only if you are directed to do so by
IBM Support.

For more information about how to use the SAT, see IBM Knowledge Center.

13.12 IBM Storage Insights monitoring


With IBM Storage Insights, you can optimize your storage infrastructure by using this
cloud-based storage management and support platform with predictive analytics.

The monitoring capabilities that IBM Storage Insights provides are useful for tasks such as
capacity planning, workload optimization, and managing support tickets for ongoing issues.

After you add your systems to IBM Storage Insights, you see the Dashboard, where you can
select a system that you want to see the overview for, as shown in Figure 13-85.

Figure 13-85 IBM Storage Insights System overview

Component health is shown at the upper center of the window. If there is a problem with one
of the Hardware, Logical, or Connectivity components, errors are shown here, as shown in
Figure 13-86.

Figure 13-86 Component Health overview

The error entries can be expanded to obtain more details by selecting the three dots at the
upper right corner of the component that has an error and then selecting View Details. The
relevant part of the more detailed System View opens, and what you see depends on which
component has the error, as shown in Figure 13-87.

Figure 13-87 Ports in error

From here, it is obvious which components have the problem and exactly what is wrong with
them, so you can log a support ticket with IBM if necessary.

13.12.1 Capacity monitoring


You can see key statistics such as Physical Capacity, Volume Capacity, and Capacity
Savings, depending on what is configured after you select a system, as shown in
Figure 13-88.

Figure 13-88 Capacity area of the IBM Storage Insights system overview

In the Capacity view, you can click View Pools, View Compressed Volumes, View
Deduplicated Volumes, and View Thin-Provisioned Volumes. Clicking any of these items
takes you to the detailed system view for the selected option. From there, you can click
Capacity to get a historical view of how the system capacity changed over time, as shown in
Figure 13-89. At any time, the user can select the timescale, resources, and metrics to be
displayed on the graph by clicking any options around the graph.


Figure 13-89 IBM Storage Insights Capacity view

If you scroll down below the graph, you find a list view of the selected option. In this example,
we selected View Pools, so the configured pools are shown with the relevant key capacity
metrics, as shown in Figure 13-90. Double-clicking a pool in the table displays its properties.

Figure 13-90 Pools list view

13.12.2 Performance monitoring
From the system overview, you can scroll down and see the three key performance statistics
for your system, as shown in Figure 13-91. For the Performance overview, these statistics are
aggregated across the whole system, and you cannot drill down by Pool, Volume, or other
items.

Figure 13-91 System overview: Performance

To view more detailed performance statistics, enter the system view again, as described in
13.12.1, “Capacity monitoring” on page 842.

For this performance example, we select View Pools, and then select Performance from the
System View pane, as shown in Figure 13-92.

Figure 13-92 IBM Storage Insights: Performance view

It is possible to customize what can be seen on the graph by selecting the metrics and
resources. In Figure 13-93 on page 845, the Overall Response Time for one pool over a
12-hour period is displayed.

Figure 13-93 Filtered performance graph

Scrolling down the graph, the Performance List view is visible, as shown in Figure 13-94.
Metrics can be selected by clicking the filter button at the right of the column headers. If you
select a row, the graph is filtered for that selection only. Multiple rows can be selected by
holding down the Shift or Ctrl keys.

Figure 13-94 Performance List View

13.12.3 Logging support tickets by using IBM Storage Insights
With IBM Storage Insights, you can log support tickets, which greatly complements the
enhanced monitoring capabilities that the software provides. When an issue is detected and
you want to engage IBM Support, complete the following steps:
1. Select the system to open the System Overview and click Get Support, as shown in
Figure 13-95.

Figure 13-95 Get Support

A window opens where you can create a ticket or update an existing ticket, as shown in
Figure 13-96.

Figure 13-96 Get Support window

2. Select Create Ticket, and the ticket creation wizard opens. Details of the system are
automatically populated, including the customer number, as shown in Figure 13-97. Select
Next.

Figure 13-97 Create Ticket wizard

3. You can add relevant details about your problem to the ticket, as shown in Figure 13-98.
It is also possible to attach images or files to the ticket, such as PuTTY logs and screen
captures. Once done, select Next.

Figure 13-98 Add a note or attachment window

4. Select a severity for the ticket. Guidance about which severity to select is shown in
Figure 13-99. Because our example involves offline storage ports that cause no
immediate impact but reduce redundancy, we select severity 2.

Figure 13-99 Selecting a Severity Level window

5. Choose whether this is a hardware or a software problem. Select the relevant option (for
this example, the offline ports are likely caused by a physical layer hardware problem).
Once done, click Next.
6. Review the details of the ticket that will be logged with IBM, as shown in Figure 13-100.
Contact details must be entered so that IBM Support can respond to the correct person.
You also must choose which type of logs should be attached to the ticket. For more
information about the types of snap, see Table 13-13 on page 833.

Figure 13-100 Review the ticket window

7. Once done, select Create Ticket. A confirmation window opens, as shown in
Figure 13-101, and IBM Storage Insights automatically uploads the snap to the ticket
when it is collected.

Figure 13-101 Ticket Creation confirmation window

13.12.4 Managing existing support tickets by using IBM Storage Insights and
uploading logs
With IBM Storage Insights, you can track existing support tickets and upload logs to them. To
do so, complete the following steps:
1. From the System Overview window, select Tickets, as shown in Figure 13-102.

Figure 13-102 View Tickets

In this window, you see the history of support tickets that were logged through
IBM Storage Insights for the system. Tickets that are not currently open are listed under
Closed Tickets, and currently open tickets are listed under Open Tickets.
2. To quickly add logs to a ticket without having to browse to the system GUI or use
IBM ECuRep, click Add Log Package to Ticket. A window opens that guides you through
the process, as shown in Figure 13-103. You can select which type of log package you
want and add a note to the ticket with the logs.

Figure 13-103 Adding a log package to the ticket

3. After clicking Update Ticket, a confirmation opens, as shown in Figure 13-104. You can
exit the wizard. IBM Storage Insights runs in the background to gather the logs and upload
them to the ticket.

Figure 13-104 Confirmation of log upload

Appendix A. Performance data and statistics gathering
This appendix provides a brief overview of the performance analysis capabilities of the
IBM Storage System and IBM Spectrum Virtualize V8.3. It also describes a method that you
can use to collect and process IBM Spectrum Virtualize performance statistics.

It is beyond the intended scope of this book to provide an in-depth understanding of
performance statistics or to explain how to interpret them. For more information about the
performance of the IBM Storage Systems, see IBM System Storage SAN Volume Controller,
IBM Storwize V7000, and IBM FlashSystem 7200 Best Practices and Performance
Guidelines, SG24-7521.

This appendix includes the following topics:


򐂰 “IBM Storage System performance overview” on page 856
򐂰 “Performance monitoring” on page 858



IBM Storage System performance overview
Storage virtualization with IBM Spectrum Virtualize provides many administrative benefits. In
addition, it can provide an increase in performance for some workloads. The caching
capability of IBM Spectrum Virtualize and its ability to stripe volumes across multiple disk
arrays can provide a performance improvement over what can otherwise be achieved when
midrange disk subsystems are used.

To ensure that the performance levels of your system are maintained, monitor performance
periodically to provide visibility into potential problems that exist or are developing so that they
can be addressed in a timely manner.

Performance considerations
When you are designing the IBM Spectrum Virtualize infrastructure or maintaining an existing
infrastructure, you must consider many factors in terms of their potential effect on
performance. These factors include, but are not limited to, dissimilar workloads that are
competing for the same resources, overloaded resources, insufficient available resources,
poor performing resources, and similar performance constraints.

Remember the following high-level rules when you are designing your storage area network
(SAN) and IBM Spectrum Virtualize layout:
򐂰 Host-to-system inter-switch link (ISL) oversubscription.
This area is the most significant input/output (I/O) load across ISLs. A best practice is to
maintain a maximum of 7-to-1 oversubscription. A higher ratio is possible, but it tends to
lead to I/O bottlenecks. This best practice also assumes a core-edge design, where the
hosts are on the edges and the IBM FlashSystem is the core.
򐂰 Storage-to-system ISL oversubscription.
This area is the second most significant I/O load across ISLs. The maximum
oversubscription is 7-to-1. A higher ratio is not supported. Again, this best practice
assumes a multiple-switch SAN fabric design.
򐂰 Node-to-node ISL oversubscription.
This area does not apply to IBM FlashSystem clusters that are composed of a single control
enclosure. This area is the least significant load of the three possible oversubscription
bottlenecks. In standard setups, this load can be ignored. Although this area is not entirely
negligible, it does not contribute significantly to the ISL load. However, node-to-node ISL
oversubscription is mentioned here in relation to the split-cluster capability that was made
available since Version 6.3 (now called IBM HyperSwap).
When the system is running in this manner, the number of ISL links becomes more
important. As with the storage-to-system ISL oversubscription, this load also has a
maximum of 7-to-1 oversubscription. Exercise caution and careful planning when you
determine the number of ISLs to implement. If you need assistance, contact your IBM
representative.
򐂰 ISL trunking or port channeling.
For the best performance and availability, use ISL trunking or port channeling.
Independent ISL links can easily become overloaded and turn into performance
bottlenecks. Bonded or trunked ISLs automatically share load and provide better
redundancy in a failure.

򐂰 Number of paths per host multipath device.
The maximum supported number of paths per multipath device that is visible on the host is
eight. Although the IBM Subsystem Device Driver Path Control Module (SDDPCM),
related products, and most vendor multipathing software can support more paths, the IBM
Storage System expects a maximum of eight paths. In general, more than eight paths has
only a negative effect on performance. Although IBM Spectrum Virtualize can work with
more than eight paths, that configuration is unsupported.
򐂰 Do not intermix dissimilar array types or sizes.
Although IBM Spectrum Virtualize supports an intermixture of different types of storage
within storage pools, it is a best practice to always use the same array model, RAID mode,
RAID size (RAID 5 6+P+S does not mix well with RAID 6 14+2), and drive speeds.

Rules and guidelines are no substitute for monitoring performance. Monitoring performance
can validate that design expectations are met and identify opportunities for improvement.

IBM Spectrum Virtualize performance perspectives


IBM Spectrum Virtualize Software was developed by the IBM Research Group. It runs on
commodity hardware (mass-produced Intel-based processors (CPUs) with mass-produced
expansion cards), and provides distributed cache and a scalable cluster architecture. One of
the main goals of its design is to take advantage of hardware refreshes. Currently, the IBM
Storage System cluster is scalable up to eight nodes (four control enclosures).

Performance scales nearly linearly as nodes are added to the cluster, until performance
eventually becomes limited by the attached components. Although virtualization provides
significant flexibility in terms of the components that are used, it does not diminish the
necessity of designing the system around the components so that it can deliver the level of
performance that you want.

The key item for planning is your SAN layout. Switch vendors have slightly different planning
requirements, but the goal is that you always want to maximize the bandwidth that is available
to the IBM Storage System ports. An IBM FlashSystem system is one of the few devices that
can drive ports to their limits on average, so it is imperative that you put significant thought
into planning the SAN layout.

Essentially, performance improvements are gained by selecting the most appropriate internal
disk drive types, spreading the workload across a greater number of back-end resources
when using external storage, and adding more caching. These capabilities are provided by
the IBM Storage System cluster. However, the performance of individual resources eventually
becomes the limiting factor.



Performance monitoring
This section highlights several performance monitoring techniques.

Collecting performance statistics


IBM Spectrum Virtualize is constantly collecting performance statistics. The default frequency
at which files are created is 15-minute intervals. The collection interval can be changed by
running the startstats command.

The statistics files for volumes, managed disks (MDisks), nodes, and drives are saved at the
end of the sampling interval. A maximum of 16 files (each) are stored before they are overlaid
in a rotating log fashion. This design provides statistics for the most recent 240-minute period
if the default 15-minute sampling interval is used. IBM Spectrum Virtualize supports
user-defined sampling intervals of 1 - 60 minutes.

For each type of object (volumes, MDisks, nodes, and drives), a separate file with statistic
data is created at the end of each sampling period and stored in /dumps/iostats.

Run the startstats command to start the collection of statistics, as shown in Example A-1.

Example: A-1 The startstats command


IBM_Storwize:fab2shared1a:superuser>startstats -interval 2

This command starts statistics collection and gathers data at 2-minute intervals.

To verify the statistics collection interval, display the system properties again, as shown in
Example A-2.

Example: A-2 Statistics collection status and frequency


IBM_Storwize:fab2shared1a:superuser>lssystem
statistics_status on
statistics_frequency 2
-- The output has been shortened for easier reading. --

Starting with Version 8.1, statistics collection cannot be stopped because the stopstats
command is no longer available.

Collection intervals: Although more frequent collection intervals provide a more detailed
view of what happens within IBM Spectrum Virtualize and IBM FlashSystem, they shorten
the amount of time that the historical data is available on IBM Spectrum Virtualize. For
example, rather than a 240-minute period of data with the default 15-minute interval, if you
adjust to 2-minute intervals, you have a 32-minute period instead.

Statistics are collected per node. The sampling of the internal performance counters is
coordinated across the cluster so that when a sample is taken, all nodes sample their internal
counters concurrently. Collect all files from all nodes for a complete analysis. Tools such as
IBM Spectrum Control perform this intensive data collection for you.
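
If you collect the files manually instead, remember that each node stores only its own
statistics files. The cpdumps command can copy the files from another node onto the
configuration node, from which they can be copied to a workstation (for example, with pscp,
as shown in the Tip later in this section). The following line is a brief sketch that assumes the
files of node ID 2 are wanted; the node ID is a placeholder:

IBM_Storwize:ITSO:superuser>cpdumps -prefix /dumps/iostats 2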

Statistics file naming
The statistics files that are generated are written to the /dumps/iostats/ directory. The file
name has the following formats:
򐂰 Nm_stats_<node_id>_<date>_<time> for MDisks statistics
򐂰 Nv_stats_<node_id>_<date>_<time> for Volumes statistics
򐂰 Nn_stats_<node_id>_<date>_<time> for node statistics
򐂰 Nd_stats_<node_id>_<date>_<time> for drives statistics

The node_id is the name of the node on which the statistics were collected. The date is in the
form <yymmdd>, and the time is in the form <hhmmss>. The following example shows an MDisk
statistics file name:
Nm_stats_113986_161019_151832

Example A-3 shows typical MDisk, volume, node, and disk drive statistics file names.

Example: A-3 File names of per node statistics


IBM_Storwize:fab2shared1a:superuser>lsdumps -prefix /dumps/iostats
id filename
0 Nd_stats_7822DFF-2_181019_173808
1 Nn_stats_7822DFF-2_181019_173808
2 Nv_stats_7822DFF-2_181019_173808
3 Nm_stats_7822DFF-2_181019_173808
4 Nm_stats_7822DFF-2_181019_175308
5 Nv_stats_7822DFF-2_181019_175308
6 Nd_stats_7822DFF-2_181019_175308
7 Nn_stats_7822DFF-2_181019_175308
...
60 Nm_stats_7822DFF-2_181019_212314
61 Nn_stats_7822DFF-2_181019_212314
62 Nd_stats_7822DFF-2_181019_212314
63 Nv_stats_7822DFF-2_181019_212314

Tip: The performance statistics files can be copied from the IBM FlashSystem nodes to a
local drive on your workstation by using pscp (included with PuTTY) from an MS-DOS
command prompt, as shown in this example:
C:\Program Files\PuTTY>pscp -unsafe -load fab2shared1a
superuser@[Link]:/dumps/iostats/* c:\statsfiles

Use the -load parameter to specify the session that is defined in PuTTY.

Specify the -unsafe parameter when you use wildcards.

You can obtain PuTTY from Download PuTTY.

Real-time performance monitoring


IBM Storage System supports real-time performance monitoring. Real-time performance
statistics provide short-term status information for the IBM FlashSystem system. The
statistics are shown as graphs in the management GUI, or can be viewed from the CLI.



With system-level statistics, you can quickly view the CPU usage and the bandwidth of
volumes, interfaces, and MDisks. Each graph displays the current bandwidth in megabytes
per second (MBps) or input/output operations per second (IOPS), and a view of bandwidth
over time.

Each node collects various performance statistics (mostly at 5-second intervals), and the
statistics are available from the config node in a clustered environment. This information
can help you determine the performance effect of a specific node.

As with system statistics, node statistics help you to evaluate whether the node is operating
within normal performance metrics.

Real-time performance monitoring gathers the following system-level performance statistics:


򐂰 CPU utilization
򐂰 Port utilization and I/O rates
򐂰 Volume and MDisk I/O rates
򐂰 Bandwidth
򐂰 Latency

Real-time statistics are not a configurable option and cannot be disabled.

Real-time performance monitoring with the CLI


The lsnodecanisterstats and lssystemstats commands are available for monitoring the
statistics by using the CLI.

The lsnodecanisterstats command provides performance statistics for the nodes that are
part of a clustered system, as shown in Example A-4. The output is truncated and shows only
part of the available statistics. You can also specify a node name in the command to limit the
output for a specific node.

Example: A-4 The lsnodecanisterstats command output


IBM_Storage:ITSO:superuser>lsnodecanisterstats
node_id node_name stat_name stat_current stat_peak stat_peak_time
1 node1 compression_cpu_pc 14 14 181019214224
1 node1 cpu_pc 2 2 181019214224
1 node1 fc_mb 0 0 181019214224
1 node1 fc_io 5 5 181019214224
1 node1 sas_mb 122 158 181019213804
1 node1 sas_io 485 627 181019213804
1 node1 iscsi_mb 0 0 181019214224
1 node1 iscsi_io 0 0 181019214224
1 node1 write_cache_pc 13 13 181019214224
1 node1 total_cache_pc 79 80 181019213959
1 node1 vdisk_mb 0 0 181019214224
1 node1 vdisk_io 0 0 181019214224
1 node1 vdisk_ms 0 0 181019214224
1 node1 mdisk_mb 52 55 181019213934
1 node1 mdisk_io 205 217 181019213934
1 node1 mdisk_ms 9 10 181019214219
1 node1 drive_mb 123 158 181019213804
1 node1 drive_io 485 627 181019213804
...
2 node2 mdisk_w_ms 8 11 181019214131
2 node2 drive_r_mb 41 50 181019214041
2 node2 drive_r_io 166 242 181019214041

2 node2 drive_r_ms 8 14 181019214041
2 node2 drive_w_mb 254 271 181019214121
2 node2 drive_w_io 1011 1071 181019214121
2 node2 drive_w_ms 14 17 181019213851
2 node2 iplink_mb 0 0 181019214226
2 node2 iplink_io 0 0 181019214226
2 node2 iplink_comp_mb 0 0 181019214226
2 node2 cloud_up_mb 0 0 181019214226
2 node2 cloud_up_ms 0 0 181019214226
2 node2 cloud_down_mb 0 0 181019214226
2 node2 cloud_down_ms 0 0 181019214226
2 node2 iser_mb 0 0 181019214226
2 node2 iser_io 0 0 181019214226

Example A-4 on page 860 shows statistics for the two node members of system ITSO. For
each node, the following columns are displayed:
򐂰 stat_name: The name of the statistic field
򐂰 stat_current: The current value of the statistic field
򐂰 stat_peak: The peak value of the statistic field in the last 5 minutes
򐂰 stat_peak_time: The time that the peak occurred

The lsnodecanisterstats command can also be used with a node canister name or ID as an
argument. For example, you can enter the command lsnodecanisterstats node1 to display
the statistics of node name node1 only.

The lssystemstats command lists the same set of statistics that is listed with the
lsnodecanisterstats command, but represents all nodes in the cluster. The values for these
statistics are calculated from the node statistics values in the following way:
򐂰 Bandwidth: Sum of bandwidth of all nodes
򐂰 Latency: Average latency for the cluster, which is calculated by using data from the whole
cluster, not an average of the single node values
򐂰 IOPS: Total IOPS of all nodes
򐂰 CPU percentage: Average CPU percentage of all nodes

Example A-5 shows the resulting output of the lssystemstats command.

Example: A-5 The lssystemstats command output


IBM_Storage:ITSO:superuser>lssystemstats
stat_name stat_current stat_peak stat_peak_time
compression_cpu_pc 14 14 181019215930
cpu_pc 2 2 181019215930
fc_mb 0 0 181019215930
fc_io 10 10 181019215930
sas_mb 726 778 181019215445
sas_io 2879 3080 181019215845
iscsi_mb 0 0 181019215930
iscsi_io 0 0 181019215930
write_cache_pc 13 14 181019215745
total_cache_pc 79 80 181019215925
vdisk_mb 0 0 181019215930
vdisk_io 0 0 181019215930
vdisk_ms 0 0 181019215930
mdisk_mb 1 2 181019215920

mdisk_io 2 4 181019215900
mdisk_ms 6 7 181019215720
drive_mb 727 778 181019215445
drive_io 2877 3075 181019215445
drive_ms 11 13 181019215855
vdisk_r_mb 0 0 181019215930
vdisk_r_io 0 0 181019215930
vdisk_r_ms 0 0 181019215930
vdisk_w_mb 0 0 181019215930
vdisk_w_io 0 0 181019215930
vdisk_w_ms 0 0 181019215930
mdisk_r_mb 0 0 181019215930
mdisk_r_io 0 0 181019215930
mdisk_r_ms 0 0 181019215930
mdisk_w_mb 1 2 181019215920
mdisk_w_io 2 4 181019215900
mdisk_w_ms 6 7 181019215720
drive_r_mb 18 34 181019215825
drive_r_io 77 140 181019215525
drive_r_ms 4 13 181019215545
drive_w_mb 708 752 181019215510
drive_w_io 2800 2971 181019215510
drive_w_ms 11 13 181019215855
power_w 374 384 181019215555
temp_c 24 24 181019215930
temp_f 75 75 181019215930
iplink_mb 0 0 181019215930
iplink_io 0 0 181019215930
iplink_comp_mb 0 0 181019215930
cloud_up_mb 0 0 181019215930
cloud_up_ms 0 0 181019215930
cloud_down_mb 0 0 181019215930
cloud_down_ms 0 0 181019215930
iser_mb 0 0 181019215930
iser_io 0 0 181019215930

Table A-1 gives the descriptions of the different counters that are presented by the
lssystemstats and lsnodecanisterstats commands.

Table A-1 List of counters for the lssystemstats and lsnodecanisterstats commands
Value Description

compression_cpu_pc Displays the percentage of allocated CPU capacity that is used for
compression.

cpu_pc Displays the percentage of allocated CPU capacity that is used for the
system.

fc_mb Displays the total number of megabytes transferred per second for Fibre
Channel (FC) traffic on the system. This value includes host I/O and any
bandwidth that is used for communication within the system.

fc_io Displays the total I/O operations that are transferred per second for FC
traffic on the system. This value includes host I/O and any bandwidth that
is used for communication within the system.


sas_mb Displays the total number of megabytes transferred per second for
serial-attached Small Computer System Interface (SCSI) (SAS) traffic on
the system. This value includes host I/O and bandwidth that is used for
background RAID activity.

sas_io Displays the total I/O operations that are transferred per second for SAS
traffic on the system. This value includes host I/O and bandwidth that is
used for background RAID activity.

iscsi_mb Displays the total number of megabytes transferred per second for internet
Small Computer Systems Interface (iSCSI) traffic on the system.

iscsi_io Displays the total I/O operations that are transferred per second for iSCSI
traffic on the system.

write_cache_pc Displays the percentage of the write cache usage for the node.

total_cache_pc Displays the total percentage for both the write and read cache usage for
the node.

vdisk_mb Displays the average number of megabytes transferred per second for
read and write operations to volumes during the sample period.

vdisk_io Displays the average number of I/O operations that are transferred per
second for read and write operations to volumes during the sample period.

vdisk_ms Displays the average amount of time in milliseconds (ms) that the system
takes to respond to read and write requests to volumes over the sample
period.

mdisk_mb Displays the average number of megabytes transferred per second for
read and write operations to MDisks during the sample period.

mdisk_io Displays the average number of I/O operations that are transferred per
second for read and write operations to MDisks during the sample period.

mdisk_ms Displays the average amount of time in milliseconds that the system takes
to respond to read and write requests to MDisks over the sample period.

drive_mb Displays the average number of megabytes transferred per second for
read and write operations to drives during the sample period.

drive_io Displays the average number of I/O operations that are transferred per
second for read and write operations to drives during the sample period.

drive_ms Displays the average amount of time in milliseconds that the system takes
to respond to read and write requests to drives over the sample period.

vdisk_w_mb Displays the average number of megabytes transferred per second for
read and write operations to volumes during the sample period.

vdisk_w_io Displays the average number of I/O operations that are transferred per
second for write operations to volumes during the sample period.

vdisk_w_ms Displays the average amount of time in milliseconds that the system takes
to respond to write requests to volumes over the sample period.

mdisk_w_mb Displays the average number of megabytes transferred per second for
write operations to MDisks during the sample period.

mdisk_w_io Displays the average number of I/O operations that are transferred per
second for write operations to MDisks during the sample period.


mdisk_w_ms Displays the average amount of time in milliseconds that the system takes
to respond to write requests to MDisks over the sample period.

drive_w_mb Displays the average number of megabytes transferred per second for
write operations to drives during the sample period.

drive_w_io Displays the average number of I/O operations that are transferred per
second for write operations to drives during the sample period.

drive_w_ms Displays the average amount of time in milliseconds that the system takes
to respond write requests to drives over the sample period.

vdisk_r_mb Displays the average number of megabytes transferred per second for
read operations to volumes during the sample period.

vdisk_r_io Displays the average number of I/O operations that are transferred per
second for read operations to volumes during the sample period.

vdisk_r_ms Displays the average amount of time in milliseconds that the system takes
to respond to read requests to volumes over the sample period.

mdisk_r_mb Displays the average number of megabytes transferred per second for
read operations to MDisks during the sample period.

mdisk_r_io Displays the average number of I/O operations that are transferred per
second for read operations to MDisks during the sample period.

mdisk_r_ms Displays the average amount of time in milliseconds that the system takes
to respond to read requests to MDisks over the sample period.

drive_r_mb Displays the average number of megabytes transferred per second for
read operations to drives during the sample period.

drive_r_io Displays the average number of I/O operations that are transferred per
second for read operations to drives during the sample period.

drive_r_ms Displays the average amount of time in milliseconds that the system takes
to respond to read requests to drives over the sample period.

iplink_mb The total number of megabytes transferred per second for Internet
Protocol (IP) replication traffic on the system. This value does not include
iSCSI host I/O operations.

iplink_comp_mb Displays the average number of compressed MBps over the IP replication
link during the sample period.

iplink_io The total I/O operations that are transferred per second for IP partnership
traffic on the system. This value does not include iSCSI host I/O
operations.

cloud_up_mb Displays the average number of megabits per second (Mbps) for upload
operations to a cloud account during the sample period.

cloud_up_ms Displays the average amount of time (in milliseconds) it takes for the
system to respond to upload requests to a cloud account during the
sample period.

cloud_down_mb Displays the average number of Mbps for download operations to a cloud
account during the sample period.

cloud_down_ms Displays the average amount of time (in milliseconds) that it takes for the
system to respond to download requests to a cloud account during the
sample period.


iser_mb Displays the total number of megabytes transferred per second for iSCSI
Extensions for Remote Direct Memory Access (RDMA) (iSER) traffic on
the system.

iser_io Displays the total I/O operations that are transferred per second for iSER
traffic on the system.
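
In addition to the current value and the peak value over the last 5 minutes, a short history of
these counters can be retrieved by adding the -history parameter with a counter name to
the lssystemstats (or lsnodecanisterstats) command. The following lines are a brief
sketch; the output is truncated, and the values that are shown are illustrative:

IBM_Storage:ITSO:superuser>lssystemstats -history cpu_pc
sample_time stat_name stat_value
181019215530 cpu_pc 2
181019215535 cpu_pc 2
181019215540 cpu_pc 3
...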

Real-time performance statistics monitoring with the GUI


The IBM Spectrum Virtualize Dashboard gives you a performance overview at a glance. You
can see the performance of the entire cluster (the system) by selecting metrics such as
bandwidth, response time, IOPS, or CPU utilization. You can also display a Node Comparison
by selecting the same metrics as for the cluster and then switching the toggle button, as
shown in Figure A-1 and Figure A-2.

Figure A-1 IBM Spectrum Virtualize Dashboard displaying the System performance overview

Figure A-2 shows the display after switching the button.

Figure A-2 IBM Spectrum Virtualize Dashboard displaying the Nodes performance overview

You can also use real-time statistics to monitor CPU utilization, volume, interface, and the
MDisk bandwidth of your system and nodes. Each graph represents 5 minutes of collected
statistics and provides a means of assessing the overall performance of your system.



The real-time statistics are available from the IBM Spectrum Virtualize GUI. To open the
Performance Monitoring window, select Monitoring → Performance (as shown in
Figure A-3).

Figure A-3 Selecting the Performance menu in the Monitoring menu

As shown in Figure A-4, the Performance monitoring pane is divided into sections that
provide utilization views for the following resources.

Figure A-4 IBM Spectrum Virtualize Performance window

򐂰 CPU Utilization: The CPU Utilization graph shows the current percentage of CPU usage
and peaks in utilization. It can also display compression CPU usage for systems with
compressed volumes.
򐂰 Volumes: Shows four metrics about the overall volume utilization graphics:
– Read
– Write
– Read latency
– Write latency
򐂰 Interfaces: The Interfaces graph displays data points for FC, iSCSI, SAS, and IP Remote
Copy (RC) interfaces. You can use this information to help determine connectivity issues
that might affect performance:
– FC
– iSCSI

– SAS
– IP RC
򐂰 MDisks: Also shows four metrics on the overall MDisks graphics:
– Read
– Write
– Read latency
– Write latency

You can use these metrics to help determine the overall performance health of the volumes
and MDisks on your system. Consistent unexpected results can indicate errors in
configuration, system faults, or connectivity issues.

The system’s performance is also always visible at the bottom of the IBM Spectrum Virtualize
window.

Note: The values that are indicated in the graphics are averages that are based on 1-second samples.

You can also select to view performance statistics for each of the available nodes of the
system, as shown in Figure A-5.

Figure A-5 Viewing statistics per node or for the entire system

You can also change the metric between MBps or IOPS, as shown in Figure A-6.

Figure A-6 Viewing performance metrics by MBps or IOPS



On any of these views, you can select any point with your cursor to see the exact value and
when it occurred. When you place your cursor over the timeline, it becomes a dotted line with
the various values that were gathered, as shown in Figure A-7.

Figure A-7 Viewing performance with details

For each of the resources, various metrics are available, and you can select which ones to
display. For example, as shown in Figure A-8, from the four available metrics for the
MDisks view (Read, Write, Read latency, and Write latency), only Read and Write IOPS are
selected.

Figure A-8 Displaying performance counters

Performance data collection and IBM Spectrum Control


Although you can obtain performance statistics in standard .xml files, the use of .xml files is a
less practical and more complicated method to analyze the IBM Spectrum Virtualize
performance statistics. IBM Spectrum Control is the supported IBM tool to collect and analyze
IBM Storage Systems performance statistics.

IBM Spectrum Control is installed separately on a dedicated system, and is not part of the
IBM Spectrum Virtualize bundle.

For more information about using IBM Spectrum Control to monitor your storage subsystem,
see IT Infrastructure.

As an alternative to IBM Spectrum Control, a cloud-based tool that is called IBM Storage
Insights is available that provides a single dashboard that gives you a clear view of all your
IBM block storage by showing performance and capacity information. You do not have to
install this tool in your environment because it is a cloud-based solution. Only an agent is
required to collect data of the storage devices.

For more information about IBM Storage Insights, see IBM Storage Insights.

Appendix B. Command-line interface setup


This appendix describes access configuration to the command-line interface (CLI) by using
the local Secure Shell (SSH) authentication method.



CLI setup
The IBM Spectrum Virtualize system features a powerful CLI that offers more options and
flexibility than the GUI. This appendix describes how to configure a management system to
connect to the IBM Spectrum Virtualize system by using the SSH protocol so that you can
issue commands by using the CLI.

For more information about the CLI, see IBM Knowledge Center.

Note: If a task completes in the GUI, the associated CLI command is always displayed in
the details, as shown throughout this book.

In the IBM Spectrum Virtualize GUI, authentication is performed by supplying a user name
and password. The CLI uses SSH to connect from a host to the IBM Spectrum Virtualize
system. A private and a public key pair or user name and password is necessary.

Using SSH keys with a passphrase is more secure than a login with a user name and
password because authenticating to a system requires the private key and the passphrase.
By using the other method, only the password is required to obtain access to the system.

When SSH keys are used without a passphrase, it becomes easier to log in to a system
because you must provide only the private key when performing the login and you are not
prompted for password. This option is less secure than using SSH keys with a passphrase.

To enable CLI access with SSH keys, complete the following steps:
1. Generate a public key and a private key as a pair.
2. Upload a public key to the IBM Spectrum Virtualize system by using the GUI.
3. Configure a client SSH tool to authenticate with the private key.
4. Establish a secure connection between the client and the system.

SSH is the communication vehicle between the management workstation and the IBM
Spectrum Virtualize system. The SSH client provides a secure environment from which to
connect to a remote machine. It uses the principles of public and private keys for
authentication.

SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the storage system, and a private key, which is kept
private on the workstation that is running the SSH client. These keys authorize specific users
to access the administration and service functions on the system.

Each key pair is associated with a user-defined ID string that can consist of up to 256
characters. Up to 100 keys can be stored on the system. New IDs and keys can be added,
and unwanted IDs and keys can be deleted. To use the CLI, an SSH client must be installed
on the management workstation. To use the CLI with SSH keys, an SSH key pair must also
be generated on the client system, and the client’s SSH public key must be stored
on the IBM Spectrum Virtualize system.

Basic setup on a Windows host


The SSH client on a Windows host that is used in this book is PuTTY. A PuTTY key generator
can also be used to generate the private and public key pair. The PuTTY client can be
downloaded at no cost from Download PuTTY.

Download the following tools:
򐂰 PuTTY SSH client: putty.exe
򐂰 PuTTY key generator: puttygen.exe

Generating a public and private key pair


To generate a public and private key pair, complete the following steps:
1. Start the PuTTY key generator to generate the public and private key pair, as shown in
Figure B-1.

Figure B-1 PuTTY key generator

Select the following options:


– SSH-2 RSA
– Number of bits in a generated key: 1024

Note: Larger SSH keys, such as 2048 bits, are also supported.



2. Click Generate and move the cursor over the blank area to generate keys (see
Figure B-2).

Figure B-2 Generating keys

To generate keys: The blank area that is indicated by the message is the large blank
rectangle in the GUI inside the Key field. Continue to move the mouse pointer over the
blank area until the progress bar reaches the far right. This action generates random
characters to create a unique key pair.

3. After the keys are generated, save them for later use. Click Save public key.
4. You are prompted to enter a name (for example, sshkey.pub) and a location for the public
key (for example, C:\Keys\). Enter this information and click Save.
Ensure that you record the SSH public key name and location because this information
must be specified later.

Public key extension: By default, the PuTTY key generator saves the public key with
no extension. Use the string pub for naming the public key. For example, add the
extension .pub to the name of the file to easily differentiate the SSH public key from the
SSH private key.

5. Click Save private key. A warning message is displayed (see Figure B-3). Click Yes to
save the private key without a passphrase.

Figure B-3 Confirming the security warning

Note: It is possible to use a passphrase for an SSH key. Although this action increases
security, it generates an extra step to log in with the SSH key because it requires the
passphrase input.

6. When prompted, enter a name (for example, sshkey.ppk), select a secure place as the
location, and click Save.

Key generator: The PuTTY key generator saves the PuTTY private key (PPK) with the
.ppk extension.

7. Close the PuTTY key generator.

Uploading the SSH public key to the IBM Storage System


After you create your SSH key pair, upload your SSH public key onto the IBM Storage
System. Complete the following steps:
1. Open the user section in the GUI, as shown in Figure B-4.

Figure B-4 Opening the user section



2. Right-click the user name for which you want to upload the key and click Properties (see
Figure B-5).

Figure B-5 User properties

3. To upload the public key, click Browse, open the folder where you stored the public SSH
key, and select the key.
4. Click OK and the key is uploaded, as shown in Figure B-6.

Figure B-6 Confirmation of SSH Key upload

5. Check in the GUI to ensure that the SSH key is imported successfully (see Figure B-7).

Figure B-7 Key successfully imported

Configuring the SSH client


Before the CLI can be used, the SSH client must be configured. Complete the following steps:
1. Start PuTTY. The PuTTY Configuration window opens (see Figure B-8).

Figure B-8 PuTTY Configuration

2. In the upper right pane, select SSH as the connection type. In the “Close window on exit”
section, select Only on clean exit (see Figure B-9 on page 878), which ensures that any
connection errors are displayed in the user’s window.



3. In the Category pane, on the left side of the PuTTY Configuration window, select
Connection → Data, as shown on Figure B-9. In the “Auto-login username” field, enter
the IBM Spectrum Virtualize user ID that was used when uploading the public key. The
admin account was used in the example that is shown in Figure B-5 on page 876.

Figure B-9 PuTTY Auto-login user name

4. In the Category pane, on the left side of the PuTTY Configuration window (see
Figure B-10), select Connection → SSH to open the PuTTY SSH Configuration window.
In the SSH protocol version section, select 2.

Figure B-10 SSH protocol Version 2

5. In the Category pane on the left, select Connection → SSH → Auth. More options are
displayed for controlling SSH authentication.

6. In the “Private key file for authentication” field in Figure B-11, browse to or enter the fully
qualified directory path and file name of the SSH client private key file that was created (in
this example, C:\Users\Tools\Putty\sshkey.ppk is used).

Figure B-11 SSH authentication

7. In the Category pane, click Session to return to the “Basic options for your PuTTY
session” view.
8. Enter the following information in the fields in the right pane (see Figure B-12):
– Host Name (or Internet Protocol (IP) address): Specify the host name or system IP
address of the IBM Spectrum Virtualize system.
– Saved Sessions: Enter a session name.
9. Click Save to save the new session (Figure B-12).

Figure B-12 Session information



10. Select the new session and click Open to connect to the IBM Spectrum Virtualize system,
as shown in Figure B-13.

Figure B-13 Connecting to a system

11. If a PuTTY Security Alert opens as shown in Figure B-14, confirm it by clicking Yes.

Figure B-14 Confirming the security alert

12. As shown in Figure B-15, PuTTY now connects to the system automatically by using the
user ID that was specified earlier, without prompting for a password.


Figure B-15 PuTTY login

The CLI is now configured for IBM Spectrum Virtualize system administration.
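
For scripted or one-off commands, the PuTTY suite also includes the plink tool, which runs a
single command non-interactively over SSH. The following line is a minimal sketch that
reuses the private key from this section; the IP address is a placeholder:

C:\Program Files\PuTTY>plink -i C:\Users\Tools\Putty\sshkey.ppk admin@<system_ip> lssystem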

Basic setup on an UNIX or Linux host


The OpenSSH client is the most common tool that is used on Linux or UNIX operating
systems (OSs). It is installed by default on most of these types of OSs. If OpenSSH is not
installed on your system, download it from OpenSSH: Portable Release.

The OpenSSH suite consists of various tools. The following tools are used to generate the
SSH keys, transfer the SSH keys to a remote system, and establish a connection to IBM
Spectrum Virtualize device by using SSH:
򐂰 ssh: OpenSSH SSH client
򐂰 ssh-keygen: Tool to generate SSH keys
򐂰 scp: Tool to transfer files between hosts

Generating a public and private key pair


To generate a public and a private key to connect to an IBM Spectrum Virtualize system
without entering the user password, run the ssh-keygen tool, as shown in Example B-1.

Example: B-1 SSH keys generation with ssh-keygen


# ssh-keygen -t rsa -b 1024
Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa): /.ssh/sshkey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /.ssh/sshkey.
Your public key has been saved in /.ssh/sshkey.pub.
The key fingerprint is:
[Link] root@[Link]
The key's randomart image is:
+--[ RSA 1024]----+
| .+=BO*|
| + oB*+|
| . oo+o |
| . . . E|
| S . o|
| . |
| |
| |
| |
+-----------------+
#



In ssh-keygen, the parameter -t refers to the type of SSH key (RSA in Example B-1 on
page 881), and -b is the size of the SSH key in bits (in Example B-1 on page 881, 1024 bits
was used).

You also must specify the path and name for the SSH keys. The name that you provide is the
name of the private key. The public key has the same name, but with extension .pub. In
Example B-1 on page 881, the path is /.ssh/, the name of the private key is sshkey, and the
name of the public key is sshkey.pub.

Note: Using a passphrase for the SSH key is optional. If a passphrase is used, security is
increased, but more steps are required to log in with the SSH key because the user must
enter the passphrase.
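
If you do use a passphrase, the extra typing can be reduced on most Linux and UNIX
systems by using ssh-agent, which caches the decrypted private key for the duration of your
session. The following lines are a minimal sketch that assumes the key from Example B-1;
the agent process ID that is shown is illustrative:

# eval $(ssh-agent)
Agent pid 4242
# ssh-add /.ssh/sshkey
Enter passphrase for /.ssh/sshkey:
Identity added: /.ssh/sshkey (/.ssh/sshkey)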

Uploading the SSH public key to the IBM Storage System


How to upload the new SSH public key to IBM Spectrum Virtualize by using the GUI is
described in “Uploading the SSH public key to the IBM Storage System” on page 875.

To upload the public key by using the CLI, complete the following steps:
1. On the SSH client (for example, AIX or Linux host), run scp to copy the public key to the
IBM Storage System. The basic syntax for the command is:
scp <file> <user>@<hostname_or_IP_address>:<path>
The directory /tmp in the IBM Spectrum Virtualize active configuration node can be used
to store the public key temporarily. Example B-2 shows the command to copy the newly
generated public key to the IBM Spectrum Virtualize system.

Example: B-2 SSH public key copy to IBM Storage System


# scp /.ssh/sshkey.pub admin@[Link]:/tmp/
Password:
sshkey.pub
100% 241 0.2KB/s 00:00
#

2. Log in to the storage system by using SSH and run the chuser command (as shown in
Example B-3) to import the public SSH key to a user.

Example: B-3 Importing the SSH public key to a user


IBM_Storage System:ITSO:admin>chuser -keyfile /tmp/sshkey.pub admin
IBM_Storage System:ITSO:admin>lsuser admin
id 4
name admin
password yes
ssh_key yes
remote no
usergrp_id 1
usergrp_name Administrator
IBM_Storage System:ITSO:admin>

When you run the lsuser command, as shown in Example B-3, the ssh_key field indicates
whether a user has a configured SSH key.

882 Implementing IBM FlashSystem 9200, 9100, 7200, and 5100 Systems with IBM Spectrum Virtualize V8.3.1
Connecting to an IBM Spectrum Virtualize system
Now that the SSH key is uploaded to the IBM Spectrum Virtualize system and assigned to a
user account, you can connect to the device by running the ssh command with the following
options:
ssh -i <SSH_private_key> <user>@<IP_address_or_hostname>

Example B-4 shows the SSH command that is running from an AIX server and connecting to
the storage system with an SSH private key and no password prompt.

Example: B-4 Connecting to IBM Storage System with an SSH private key
# ssh -i /.ssh/sshkey admin@[Link]
IBM_Storage System:ITSO:admin>
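
To avoid typing the private key path and user name on every invocation, the OpenSSH client configuration file can define a host alias. The following minimal sketch uses a hypothetical alias (flashsystem) and a placeholder management IP address; adjust the values for your environment:

# cat ~/.ssh/config
Host flashsystem
    HostName <cluster_management_IP>
    User admin
    IdentityFile /.ssh/sshkey

With this entry in place, running ssh flashsystem opens the same session that is shown in Example B-4.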


Appendix C. Terminology
This appendix summarizes the IBM Spectrum Virtualize and IBM Storage terms that are
commonly used in this book.

For more information about the complete set of terms that relate to the IBM Storage Systems,
see IBM Knowledge Center.



Commonly encountered terms
This book uses the common IBM Spectrum Virtualize and IBM Storage terminology that is
listed in this section.

Access mode
One of the modes in which a logical unit (LU) in a disk controller system can operate. The
three access modes are image mode, managed space mode, and unconfigured mode. See
also “Image mode” on page 897, “Managed mode” on page 900, and “Virtualized storage” on
page 910.

Array
An ordered collection, or group, of physical devices (disk drive modules) that are used to
define logical volumes or devices. An array is a group of drives that is designated to be
managed with a redundant array of independent disks (RAID).

Asymmetric virtualization
Asymmetric virtualization is a virtualization technique in which the virtualization engine is
outside the data path and performs a metadata-style service. The metadata server contains
all the mapping and locking tables, and the storage devices contain only data. See also
“Symmetric virtualization” on page 909.

Asynchronous replication
Asynchronous replication is a type of replication in which control is given back to the
application as soon as the write operation is made to the source volume. Later, the write
operation is made to the target volume. See also “Synchronous replication” on page 909.

Automatic data placement mode


Automatic data placement mode is an Easy Tier operating mode in which the host activity on
all the volume extents in a pool is “measured,” a migration plan is created, and then
automatic extent migration is performed.

Auxiliary volume
The volume that contains a mirror of the data on the master volume. See also “Master
volume” on page 901, “Metro Global Mirror” on page 901, and “Relationship” on page 906.

Available (usable) capacity


See “Capacity” on page 887.

Back end
See “Front end and back end” on page 895.

Caching input/output group


The caching input/output (I/O group) is the I/O group in the system that performs the cache
function for a volume.

Call home
Call home is a communication link that is established between a product and a service
provider. The product can use this link to call IBM or another service provider when the
product requires service. With access to the machine, service personnel can perform service
tasks, such as viewing error and problem logs or initiating trace and dump retrievals.

Canister
A canister is a single processing unit within a storage system.

Capacity
IBM applies the following definitions to capacity:
򐂰 Available capacity
The amount of usable capacity that is not yet used in a system, pool, array, or MDisk.
򐂰 Capacity
The amount of data that can be contained on a storage medium.
򐂰 Data reduction
A set of techniques that can be used to reduce the amount of usable capacity that is
required to store data. Examples of data reduction include data deduplication and
compression.
򐂰 Data reduction savings
The total amount of usable capacity that is saved in a system, pool, or volume through the
application of an algorithm, such as compression or deduplication on the written data. This
saved capacity is the difference between the written capacity and the used capacity.
򐂰 Effective capacity
The amount of provisioned capacity that can be created in a system or pool without
running out of usable capacity given the current data reduction savings being achieved.
This capacity equals the usable capacity that is divided by the data reduction savings
percentage. A worked example follows this list.
򐂰 Overhead capacity
An amount of usable capacity that is occupied by metadata in a system or pool and other
data that is used for system operations.
򐂰 Overprovisioned
The result of creating more provisioned capacity in a storage system or pool than there is
usable capacity. Overprovisioning occurs when thin provisioning or data reduction
techniques ensure that the used capacity of the provisioned volumes is less than their
provisioned capacity.
򐂰 Overprovisioned ratio
The ratio of provisioned capacity to usable capacity in the pool or system.
򐂰 Provisioned capacity
The total capacity of all volumes and volume copies in a pool or system.
򐂰 Provisioning limit (maximum provisioned capacity or overprovisioning limit)
In some storage systems, restrictions in the storage hardware or limits that are configured
by the user define the maximum provisioned capacity in a pool or system.
򐂰 Raw capacity
The reported capacity of the drives in the system before formatting or RAID is applied.



򐂰 Standard provisioning
The ability to completely use a volume's capacity for that specific volume.
򐂰 Standard provisioned volume
A volume that uses all the storage at creation.
򐂰 Thin-provisioning savings
The total amount of usable capacity that is saved in a pool, system, or volume by using
usable capacity when needed as a result of write operations. The capacity that is saved is
the difference between the provisioned capacity minus the written capacity.
򐂰 Total capacity savings
The total amount of usable capacity that is saved in a pool, system, or volume through
thin-provisioning and data reduction techniques. The capacity that is saved is the
difference between the used usable capacity and the provisioned capacity.
򐂰 Usable capacity
The amount of capacity that is provided for storing data on a system, pool, array, or MDisk
after formatting and RAID techniques are applied. Usable capacity is the total of used and
available capacity. For example, 50 TiB used, 50 TiB available is a usable capacity of
100 TiB.
򐂰 Used capacity
The amount of usable capacity that is taken up by data or overhead capacity in a system,
pool, array, or MDisk after data reduction techniques are applied.
򐂰 Written capacity
The amount of usable capacity that might be used to store written data in a pool or system
before data reduction is applied.
򐂰 Written capacity limit
The largest amount of capacity that can be written to a drive, array, or MDisk. The limit can
be reached even when usable capacity is still available.
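
As an illustration of how several of these terms relate (the numbers are hypothetical), consider a pool with 100 TiB of usable capacity on which 150 TiB of provisioned capacity was created. The overprovisioned ratio is 150 / 100 = 1.5, so the pool is overprovisioned. If the written data achieves a 2:1 data reduction (50% savings), the effective capacity is 100 TiB / 0.5 = 200 TiB, which is the amount of provisioned capacity that could be created before usable capacity runs out at the current savings rate.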

Capacity licensing
Capacity licensing is a licensing model that licenses features with a price-per-terabyte model.
Licensed features are IBM FlashCopy, Metro Mirror (MM), Global Mirror (GM), and
virtualization. See also “FlashCopy” on page 894, “Metro Mirror” on page 901, and
“Virtualized storage” on page 910.

Capacity recycling
Capacity recycling means the amount of provisioned capacity that can be recovered without
causing stress or performance degradation. This capacity identifies the amount of resources
that can be reclaimed and provisioned to other objects in an environment.

Chain
A set of enclosures that is attached to provide redundant access to the drives inside the
enclosures. Each control enclosure can have one or more chains.

Challenge Handshake Authentication Protocol


Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that
protects against eavesdropping by using a challenge and a hashed response that is based on
a shared secret, so the password is never sent over the link in the clear.

Change volume
A volume that is used in GM that holds earlier consistent revisions of data when changes are
made.

Channel extender
A channel extender is a device that is used for long-distance communication that connects
other storage area network (SAN) fabric components. Generally, channel extenders can
involve protocol conversion to asynchronous transfer mode (ATM), Internet Protocol (IP), or
another long-distance communication protocol.

Child pool
Administrators can use child pools to control capacity allocation for volumes that are used for
specific purposes. Rather than being created directly from managed disks (MDisks), child
pools are created from existing capacity that is allocated to a parent pool. As with parent
pools, volumes can be created that specifically use the capacity that is allocated to the child
pool. Child pools are similar to parent pools with similar properties. Child pools can be used
for volume copy operation. See also “Parent pool” on page 902.

Cloud account
An agreement with a cloud service provider (CSP) to use storage or other services at that
service provider. Access to the cloud account is granted by presenting valid credentials.

Cloud container
A cloud container is a virtual object that includes all of the elements, components, or data that
is common to a specific application or data.

Cloud service provider


A cloud service provider (CSP) is the company or organization that provides off- and
on-premises cloud services, such as storage, server, and network. IBM Spectrum Virtualize
includes built-in software capabilities to interact with CSPs such as IBM Cloud, Amazon S3,
and deployments of OpenStack Swift.

Cloud tenant
A cloud tenant is a group or an instance that provides common access with the specific
privileges to an object, software, or data source.

Clustered system
A clustered system, which was known as a cluster, is a group of up to eight IBM Storage
System node canisters (two in each control enclosure) that presents a single configuration,
management, and service interface to the user.

Cold extent
A cold extent is an extent of a volume that does not get any performance benefit if it is moved
from a hard disk drive (HDD) to a flash drive. A cold extent also refers to an extent that must
be migrated onto an HDD if it is on a flash drive.

Compression
Compression is a function that removes repetitive characters, spaces, strings of characters,
or binary data from the data that is being processed and replaces characters with control
characters. Compression reduces the amount of storage space that is required for data.



Compression accelerator
A compression accelerator is hardware onto which the work of compression is offloaded from
the microprocessor.

Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide
configuration and service functions over the network interface. This node is termed the
configuration node. This configuration node manages the data that describes the
clustered-system configuration and provides a focal point for configuration commands. If the
configuration node fails, another node in the cluster transparently assumes that role.

Consistency group
A consistency group is a group of copy relationships between virtual volumes or data sets that
are maintained with the same time reference so that all copies are consistent in time. A
consistency group can be managed as a single entity.

Container
A container is a software object that holds or organizes other software objects or entities.

Contingency capacity
For thin-provisioned volumes that are configured to automatically expand, the contingency
capacity is the unused real capacity that is maintained. For thin-provisioned volumes that are
not configured to automatically expand, it is the difference between the used capacity and the
new real capacity.

Copied state
Copied is a FlashCopy state that indicates that a copy was triggered after the copy
relationship was created. The Copied state indicates that the copy process is complete, and
the target disk has no further dependency on the source disk. The time of the last trigger
event is normally displayed with this status.

Counterpart SAN
A counterpart SAN is the non-redundant portion of a redundant SAN. A counterpart SAN
provides all of the connectivity of the redundant SAN, but without the 100% redundancy.
IBM Storage canisters are typically connected to a “redundant SAN” that is made up of two
counterpart SANs. A counterpart SAN is often called a SAN fabric.

Cross-volume consistency
A consistency group property that ensures consistency between volumes when an
application issues dependent write operations that span multiple volumes.

Data consistency
Data consistency is a characteristic of the data at the target site where the dependent write
order is maintained to ensure the recoverability of applications.

Data deduplication
Data deduplication is a method of reducing storage needs by eliminating redundant data.
Only one instance of the data is retained on storage media. Other instances of the same data
are replaced with a pointer to the retained instance.

Data encryption key
The data encryption key is used to encrypt data. It is created automatically when an
encrypted object, such as an array, a pool, or a child pool, is created. It is stored in secure
memory and it cannot be viewed or changed. The data encryption key is encrypted by using
the master access key.

Data migration
Data migration is the movement of data from one physical location to another physical
location without the disruption of application I/O operations.

Data reduction
Data reduction is a set of techniques that can be used to reduce the amount of physical
storage that is required to store data. An example of data reduction includes data
deduplication and compression. See also “Data Reduction Pool” and “Capacity” on page 887.

Data Reduction Pool


Data Reduction Pools (DRPs) are specific types of pools where more control over volumes
capacity is given to specific hosts (for example VMware vStorage APIs for Array Integration
(VAAI), vSphere APIs for Storage Awareness (VASA), VVOL, and Microsoft Offloaded Data
Transfer (ODX)). These hosts can return unused space for reuse. With standard pools, the
system is not aware of any unused space on host-allocated volumes. See also “Data
reduction”.

Data reduction savings


See “Capacity” on page 887.

Deduplication
See “Data deduplication” on page 890.

Dependent write operation


A write operation that must be applied in the correct order to maintain cross-volume
consistency.

Directed maintenance procedure


The fix procedures, which are also known as directed maintenance procedures (DMPs),
ensure that you fix any outstanding errors in the error log. To fix errors, from the Monitoring
window, click Events. The Next Recommended Action is displayed at the top of the Events
window. Select Run This Fix Procedure and follow the instructions.

Discovery
The automatic detection of a network topology change, for example, new and deleted nodes
or links.

Disk tier
MDisks (logical unit numbers (LUNs)) that are presented to the IBM Storage cluster likely
have different performance attributes because of the type of disk or RAID array on which they
are installed. The MDisks can be on 15,000 RPM Fibre Channel (FC) or serial-attached SCSI
(SAS) disks, Nearline (NL) SAS or Serial Advanced Technology Attachment (SATA) disks, or
even flash drives. Therefore, a storage tier attribute is assigned to each MDisk, and the
default is generic_hdd.



Distributed redundant array of independent disks
An alternative RAID scheme where the number of drives that are used to store the array can
be greater than the equivalent, typical RAID scheme. The same data stripes are distributed
across a greater number of drives, which increases the opportunity for parallel I/O and
improves the overall performance of array. See also “Rebuild area” on page 905.

Drive technology
A category of a drive that pertains to the method and reliability of the data storage techniques
being used on the drive. Possible values include enterprise (ENT) drive, NL drive, or
solid-state drive (SSD).

Dual Inline Memory Module


A Dual Inline Memory Module (DIMM) is a small circuit board with memory-integrated circuits
containing signal and power pins on both sides of the board.
򐂰 Channel: The memory modules are installed into matching banks, which are usually
color-coded on the planar. These separate channels enable the memory controller to
access each memory module. For the Intel Cascade Lake architecture, there are six
DIMM Memory channels per CPU, and each memory channel has two DIMMs. The
memory bandwidth is tied to each of these channels, and the speed of access for the
memory controller is shared across the pair of DIMMs in that channel.
򐂰 Slot: Generally, the physical slot that a DIMM can fit into, but in this context, a slot is
DIMM0 or DIMM1, which refers to the first or second slot within a channel on the planar.
There are two slots per memory channel on the IBM SAN Volume Controller SV2
hardware. On the planar, DIMM0 is the blue slot and DIMM1 is the black slot within each
channel.
򐂰 Rank: A single-rank DIMM has one set of memory chips that is accessed while writing to
or reading from the memory. A dual-rank DIMM is like having two single-rank DIMMs on
the same module, with only one rank accessible at a time. A quad-rank DIMM is,
effectively, two dual-rank DIMMs on the same module. The 32 GB DIMMs are dual-rank.

Easy Tier
Easy Tier is a volume performance function within the IBM Storage family that provides
automatic data placement of a volume’s extents in a multitiered storage pool. The pool
normally contains a mix of flash drives and HDDs. Easy Tier measures host I/O activity on the
volume’s extents and migrates hot extents onto the flash drives to ensure the maximum
performance.

Effective capacity
See “Capacity” on page 887.

Encryption key
The encryption key, also known as master access key, is created and stored on USB flash
drives or on a key server when encryption is enabled. The master access key is used to
decrypt the data encryption key.

Encryption key manager / server


An internal or external system that receives and then serves existing encryption keys or
certificates to a storage system.

Encryption recovery key
An encryption key that enables a method to recover from an encryption deadlock situation
where the normal encryption key servers are not available.

Encryption of data-at-rest
Encryption of data-at-rest is the inactive encryption data that is stored physically on the
storage system.

Evaluation mode
Evaluation mode is an Easy Tier operating mode in which the host activity on all the volume
extents in a pool is “measured” only. No automatic extent migration is performed.

Event (error)
An event is an occurrence of significance to a task or system. Events can include the
completion or failure of an operation, user action, or a change in the state of a process.

Event code
An event code is a value that is used to identify an event condition to a user. This value might
map to one or more event IDs or to values that are presented on the service window. This
value is used to report error conditions to IBM and to provide an entry point into the service
guide.

Event ID
An event ID is a value that is used to identify a unique error condition that was detected by the
IBM Storage System. An event ID is used internally in the cluster to identify the error.

Excluded condition
The excluded condition is a status condition. It describes an MDisk that the IBM Storage
System decided is no longer sufficiently reliable to be managed by the cluster. The user must
issue a command to include the MDisk in the cluster-managed storage.

Extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between
MDisks and volumes. The size of the extent can range from 16 MB to 8 GB.

External storage
External storage refers to MDisks that are SCSI LUs that are presented by storage systems
that are attached to and managed by the clustered system.

Failback
Failback is the restoration of an appliance to its initial configuration after the detection and
repair of a failed network or component.

Failover
Failover is an automatic operation that switches to a redundant or standby system or node in
a software, hardware, or network interruption. See also “Failback”.

Feature activation code


An alphanumeric code that activates a licensed function on a product. See also “License key”
on page 900.



Fibre Channel
Fibre Channel (FC) is a technology for transmitting data between computer devices. It is
especially suited for attaching computer servers to shared storage devices and for
interconnecting storage controllers and drives. See also “Zoning” on page 912.

Fibre Channel port logins


FC port logins refer to the number of hosts that can see any one storage system port. The
IBM Storage System has a maximum limit per node port (N_Port) of FC logins that are
allowed.

Fibre Channel Arbitrated Loop


Fibre Channel Arbitrated Loop (FC-AL) is an implementation of the FC standards that uses a
ring topology for the communication fabric, as described in American National Standards
Institute (ANSI) INCITS 272-1996 (R2001). In this topology, two or more FC end points are
interconnected through a looped interface.

Fibre Channel connection


A Fibre Channel connection (FICON) is an FC communication protocol for IBM mainframe
computers and peripheral devices.

Fibre Channel over IP


Fibre Channel over IP (FCIP) is network storage technology that combines the features of the
Fibre Channel Protocol (FCP) and the IP to connect distributed SANs over large distances.

Fibre Channel Protocol


Fibre Channel Protocol (FCP) is the serial SCSI command protocol that is used on FC
networks.

Field-replaceable unit
Field-replaceable units (FRUs) are individual parts that are replaced entirely when any one of
the unit’s components fails. They are held as spares by the IBM service organization.

File Transfer Protocol


In TCP/IP, File Transfer Protocol (FTP) is an application layer protocol that uses TCP and
telnet services to transfer bulk-data files between machines or hosts.

Fix procedure
A maintenance procedure that runs within the product application and provides step-by-step
guidance to resolve an error condition.

FlashCopy
FlashCopy refers to a point-in-time (PiT) copy where a virtual copy of a volume is created.
The target volume maintains the contents of the volume at the PiT when the copy was
established. Any subsequent write operations to the source volume are not reflected on the
target volume.

FlashCopy mapping
A FlashCopy mapping is the relationship that is defined between a source volume and a
target volume for a FlashCopy operation.

FlashCopy relationship
See “FlashCopy mapping” on page 894.

FlashCopy service
FlashCopy service is a copy service that duplicates the contents of a source volume on a
target volume. In the process, the original contents of the target volume are lost. See also
“Point-in-time copy” on page 903.

IBM FlashCore Module


The IBM FlashCore Module (FCM) is a family of high-performance flash drives. The FCM
design uses the Non-Volatile Memory Express (NVMe) protocol, a PCIe Gen3 interface, and
high-speed NAND memory to provide high throughput and input/output operations per
second (IOPS) and low latency. FCM drives are available in different capacities.
Hardware-based data compression and self-encryption are supported. The FCM drives are
accessible from the front of the enclosure.

Flash drive
A data storage device, which is typically removable and rewriteable, that uses solid-state
memory to store persistent data. See also “Flash module”.

Flash module
A modular hardware unit containing flash memory, one or more flash controllers, and
associated electronics. See also “Flash drive”.

Front end and back end


The IBM Storage System takes MDisks to create pools of capacity from which volumes are
created and presented to application servers (hosts). The volumes that are presented to the
hosts are in the front end of an IBM Storage System.

Full restore operation


A copy operation where a local volume is created by reading an entire volume snapshot
from cloud storage.

Full snapshot
A type of volume snapshot that contains all the volume data. When a full snapshot is created,
an entire copy of the volume data is transmitted to the cloud.

General Parallel File System


General Parallel File System (GPFS) is a high-performance shared-disk file system that can
provide data access from nodes in a clustered system environment.

Gigabyte
A gigabyte (GB) is, for processor storage, real and virtual storage, and channel volume, two to
the power of 30 or 1,073,741,824 bytes. For disk storage capacity and communications
volume, it is 1,000,000,000 bytes.

Global Mirror
Global Mirror (GM) is a method of asynchronous replication that maintains data consistency
across multiple volumes within or across multiple systems. GM is used where distances
between the source site and target site cause increased latency beyond what the application
can accept.



Global Mirror with Change Volumes
Change volumes are used to record changes to the primary and secondary volumes of a
Remote Copy (RC) relationship. A FlashCopy mapping exists between a primary and its
change volume, and a secondary and its change volume.

GPFS
See “General Parallel File System” on page 895.

GPFS cluster
A system of nodes that are defined as being available for use by GPFS file systems.

GPFS snapshot
A PiT copy of a file system or file set.

Grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap
(64 kibibytes (KiB) or 256 KiB) in the IBM Storage System. A grain is also the unit to extend
the real size of a thin-provisioned volume (32 KiB, 64 KiB, 128 KiB, or 256 KiB).

Graphical user interface


A graphical user interface (GUI) is a computer interface that presents a visual metaphor of a
real-world scene, often of a desktop, by combining high-resolution graphics, pointing devices,
menu bars and other menus, overlapping windows, icons, and the object-action relationship.

Hop
One segment of a transmission path between adjacent nodes in a routed network.

Host
A physical or virtual computer system that hosts computer applications, with the host and the
applications using storage.

Host bus adapter


A host bus adapter (HBA) is an interface card that connects a server to the SAN environment
through its internal bus system, for example, PCI Express. Typically, it is referred to the FC
adapters.

Host cluster
A configured set of physical or virtual hosts that share one or more storage volumes to
increase scalability or availability of computer applications.

Host ID
A host ID is a numeric identifier that is assigned to a group of host FC ports or internet Small
Computer Systems Interface (iSCSI) host names for LUN mapping. For each host ID, SCSI
IDs are mapped to volumes separately. The intent is to have a one-to-one relationship
between hosts and host IDs, although this relationship cannot be policed.

Host mapping
Host mapping refers to the process of controlling which hosts have access to specific
volumes within a cluster. Host mapping is equivalent to LUN masking.
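
For illustration, on IBM Spectrum Virtualize systems, a host mapping is typically created with the mkvdiskhostmap CLI command. The following minimal sketch assumes a hypothetical host object (host1) and volume (volume1):

IBM_Storage System:ITSO:admin>mkvdiskhostmap -host host1 volume1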

Host object
A logical representation of a host within a storage system that is used to represent the host
for configuration tasks.

Host zone
A zone that is defined in the SAN fabric in which the hosts can address the system.

Hot extent
A hot extent is a frequently accessed volume extent that gets a performance benefit if it is
moved from an HDD onto a flash drive.

Hot spare node (IBM SAN Volume Controller only)


A hot spare node is an online SAN Volume Controller node that is defined in a cluster but not
in any I/O group. If any online node in any I/O group of the cluster fails, it is automatically
swapped with the spare node. After the recovery of the original node finishes, the spare node
returns to the standby spare status. This feature is not available for
IBM FlashSystem 7200.

IBM HyperSwap
Pertaining to a function that provides continuous, transparent availability against storage
errors and site failures, and is based on synchronous replication.

IBM Real-time Compression Appliance


IBM Real-time Compression Appliance® is an IBM integrated software function for storage
space efficiency. The RACE engine compresses data on volumes in real time with minimal
effect on performance.

IBM SAN Volume Controller


IBM SAN Volume Controller is an appliance that is designed for attachment to various host
computer systems. The SAN Volume Controller performs block-level virtualization of disk
storage. IBM Spectrum Virtualize is a software engine of SAN Volume Controller (and
IBM Storage System family) that performs block-level virtualization of disk storage.

IBM Remote Support Server and Client


IBM Remote Support Client is a software toolkit that is in IBM Storage System and opens a
secured tunnel to the IBM Remote Support Server. IBM Remote Support Server is in the IBM
network and collects key health check and troubleshooting information that is required by IBM
support personnel.

IBM Security Key Lifecycle Manager


IBM Security Key Lifecycle Manager (IBM SKLM) centralizes, simplifies, and automates the
encryption key management process to help minimize risk and reduce operational costs of
encryption key management.

Image mode
Image mode is an access mode that establishes a one-to-one mapping of extents in the
storage pool (existing LUN or (image mode) MDisk) with the extents in the volume. See also
“Managed mode” on page 900 and “Virtualized storage” on page 910.



Image volume
An image volume is a volume in which a direct block-for-block translation exists from the
MDisk to the volume.

Incremental restore operation


A copy operation where a local volume is modified to match a volume snapshot by reading
from cloud storage only the parts of the volume snapshot that differ from the local volume.

Incremental snapshot
A type of volume snapshot where the changes to a local volume relative to the volume's
previous snapshot are stored on cloud storage.

Input/output group
A collection of volumes and canister relationships that present a common interface to host
systems. Each pair of canister is known as an input/output (I/O) group. An I/O group has a set
of volumes that are associated with it that are presented to host systems. Each node
canister is associated with exactly one I/O group. The canisters in an I/O group provide a
failover and failback function for each other.

Input/output operations per second (IOPS)


A standard computing benchmark that is used to determine the best configuration settings for
servers.

Input/output throttling rate


The maximum rate at which an I/O transaction is accepted for a volume.

Internal storage
Internal storage refers to an array of MDisks and drives that are held in IBM Storage System
enclosures.

Internet Protocol
Internet Protocol (IP) is a protocol that routes data through a network or interconnected
networks. This protocol acts as an intermediary between the higher protocol layers and the
physical network.

Internet Small Computer Systems Interface


The internet Small Computer Systems Interface (iSCSI) is a protocol that is used by a host
system to manage iSCSI targets and iSCSI discovery. iSCSI initiators use the internet
Storage Name Service (iSNS) protocol to locate the appropriate storage resources.

Internet Storage Name Service


The iSNS Protocol that is used by a host system to manage iSCSI targets and the automated
iSCSI discovery, management, and configuration of iSCSI and FC devices. It was defined in
Request for Comments (RFC) 4171.

Inter-switch link hop


An inter-switch link (ISL) is a connection between two switches and counted as one ISL hop.
The number of hops is always counted on the shortest route between two N-ports (device
connections). In an IBM Storage System environment, the number of ISL hops is counted on
the shortest route between the pair of canisters that are farthest apart. The IBM Storage
System supports a maximum of three ISL hops.

iSCSI
See “Internet Small Computer Systems Interface”.

iSCSI alias
An alternative name for the iSCSI-attached host.

iSCSI initiator
An initiator functions as an iSCSI client. An initiator typically serves the same purpose to a
computer as a SCSI bus adapter would, except that, instead of physically cabling SCSI
devices (such as HDDs and tape changers), an iSCSI initiator sends SCSI commands over
an IP network.

iSCSI name
A name that identifies an iSCSI target adapter or an iSCSI initiator adapter. An iSCSI name
can be an iSCSI Qualified Name (IQN) or an extended-unique identifier (EUI). Typically, an
IQN is based on a reversed domain name.

iSCSI Qualified Name


iSCSI Qualified Name (IQN) refers to special names that identify both iSCSI initiators and
targets. IQN is one of the three name formats that is provided by iSCSI. The IQN format is
iqn.<yyyy-mm>.<reversed domain name>. For example, the default for an IBM Storage
System canister can be in the following format:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
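
As a purely hypothetical illustration of the format, a host initiator that is named server01 in the domain example.com might use the IQN iqn.2020-04.com.example:server01.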

iSCSI session
The interaction (conversation) between an iSCSI Initiator and an iSCSI Target.

iSCSI target
An iSCSI target is a storage resource on an iSCSI server.

Just a bunch of disks


Just a bunch of disks (JBOD) refers to HDDs that are not configured as a RAID to increase
fault tolerance or improve data access performance.

Key server
A server that negotiates the values that determine the characteristics of a dynamic virtual
private network (VPN) connection that is established between two endpoints. See “Encryption
key manager / server” on page 892.

Latency
The time interval between the initiation of a send operation by a source task and the
completion of the matching receive operation by the target task. More generally, latency is the
time between a task initiating data transfer and the time that transfer is recognized as
complete at the data destination.

Least recently used


Least recently used (LRU) pertains to an algorithm that is used to identify and make available
the cache space that contains the data that was least recently used.



Licensed capacity
The amount of capacity on a storage system that a user is entitled to configure.

License key
An alphanumeric code that activates a licensed function on a product.

License key file


A file that contains one or more licensed keys.

Lightweight Directory Access Protocol


Lightweight Directory Access Protocol (LDAP) is an open protocol that uses TCP/IP to
provide access to directories that support an X.500 model. It does not incur the resource
requirements of the more complex X.500 directory access protocol. For example, LDAP can
be used to locate people, organizations, and other resources in an internet or intranet
directory.

Local and remote fabric interconnect


The local fabric interconnect and the remote fabric interconnect are the SAN components that
are used to connect the local and remote fabrics. Depending on the distance between the two
fabrics, they can be single-mode optical fibers that are driven by long wave gigabit interface
converters (GBICs) or small form-factor pluggable (SFP), or more sophisticated components,
such as channel extenders or special SFP modules that are used to extend the distance
between SAN components.

Local fabric
The local fabric is composed of SAN components (switches, cables, and other components)
that connect the components (nodes, hosts, and switches) of the local cluster together.

Logical drive
See “Volume” on page 911.

Logical unit and logical unit number


The logical unit (LU) is defined by the SCSI standards as an entity that exhibits disk-like
behavior, such as a volume or an MDisk. The logical unit number (LUN) is the number that
identifies an LU.

LUN masking
A process where a host object can detect more LUNs than it is intended to use, and the
device-driver software masks the LUNs that are not to be used by this host.

Machine signature
A string of characters that identifies a system. A machine signature might be required to
obtain a license key.

Managed disk
A managed disk (MDisk) is a SCSI disk that is presented by a RAID controller and managed
by IBM Storage Systems. The MDisk is not visible to host systems on the SAN.

Managed mode
An access mode that enables virtualization functions to be performed. See also “Image
mode” on page 897 and “Virtualized storage” on page 910.

Management node
A node that is used for configuring, administering, and monitoring a system.

Master volume
In most cases, the volume that contains a production copy of the data and that an application
accesses. See also “Auxiliary volume” on page 886, and “Relationship” on page 906.

Maximum replication delay


Maximum replication delay is the number of seconds that MM or GM replication can delay a
write operation to a volume.

MDisk
See “Managed disk” on page 900.

MDisk group (storage pool)


See “Storage pool (MDisk group)” on page 908.

Megabytes per second


Megabytes per second (MBps) is a unit of data transfer rate that is equal to 1,048,576
(1024 * 1024) bytes per second.

Metro Global Mirror


Metro Global Mirror (MGM) is a cascaded solution where MM synchronously copies data to
the target site. This MM target is the source volume for GM that asynchronously copies data
to a third site. This solution can provide disaster recovery (DR) with no data loss at GM
distances when the intermediate site does not participate in the disaster that occurs at the
production site.

Metro Mirror
Metro Mirror (MM) is a method of synchronous replication that maintains data consistency
across multiple volumes within the system. MM is used when the write latency that is caused
by the distance between the source site and target site is acceptable to application
performance.

Mirrored volume
A mirrored volume is a single virtual volume that has two physical volume copies. The primary
physical copy is known within the IBM Storage System as copy 0 and the secondary copy is
known within the IBM Storage System as copy 1.

N_Port ID Virtualization
N_Port ID Virtualization (NPIV) is an FC feature whereby multiple FC N_Port IDs can share a
single physical N_Port.

Nearline SAS drive


A drive that combines the high capacity data storage technology of a SATA drive with the
benefits of a SAS interface for improved connectivity.

Node
A single processing unit within a system. For redundancy, multiple nodes are typically
deployed to make up a system.



Node canister
A node canister is a hardware unit that includes the node hardware, fabric and service
interfaces, and SAS expansion ports. Node canisters are recognized on IBM Storage System
products. In SAN Volume Controller, all these components are spread within the whole
system chassis, so the node is considered as a whole rather than as a node canister.

Node rescue
The process by which a node with no valid software is installed on its HDD can copy software
from another node that is connected to the same FC fabric.

Object-Based Access Groups


See “Ownership Groups”.

Object storage
Object storage is a general term that refers to the entity in which cloud object storage
organizes, manages, and stores units of storage or just objects.

Overprovisioned
See “Capacity” on page 887.

Overprovisioned ratio
See “Capacity” on page 887.

Oversubscription
Oversubscription refers to the ratio of the sum of the traffic on the initiator N-port connections
to the traffic on the most heavily loaded ISLs, where more than one connection is used
between these switches. Oversubscription assumes a symmetrical network, and a specific
workload that is applied equally from all initiators and sent equally to all targets. A
symmetrical network means that all the initiators are connected at the same level, and all the
controllers are connected at the same level.

Ownership Groups
The Ownership Groups feature provides a method of implementing a multi-tenant solution on
the system. Ownership groups enable the allocation of storage resources to several
independent tenants with the assurance that one tenant cannot access resources that are
associated with another tenant. Ownership groups restrict access for users in the ownership
group to only those objects that are defined within that ownership group.

Parent pool
Parent pools receive their capacity from MDisks. All MDisks in a pool are split into extents of
the same size. Volumes are created from the extents that are available in the pool. You can
add MDisks to a pool at any time either to increase the number of extents that are available
for new volume copies or to expand existing volume copies. The system automatically
balances volume extents between the MDisks to provide the best performance to the
volumes. See also “Child pool” on page 889.

Partner node
The other node that is in the I/O group to which this node belongs.

Partnership
In MM or GM operations, the relationship between two clustered systems. In a
clustered-system partnership, one system is defined as the local system and the other
system as the remote system.

Performance group
A collection of volumes that is assigned the same performance characteristics. See also
“Performance policy”.

Performance policy
A policy that specifies performance characteristics, for example quality of service (QoS). See
also “Pool”.

Point-in-time copy
A PiT copy is an instantaneous copy that the FlashCopy service makes of the source volume.
See also “FlashCopy service” on page 895.

Pool
See “Storage pool (MDisk group)” on page 908.

Pool pair
Two storage pools that are required to balance workload. Each storage pool is controlled by a
separate node.

Preferred node
When you create a volume, you can specify a preferred node. Many of the multipathing driver
implementations that the system supports use this information to direct I/O to the preferred
node. The other node in the I/O group is used only if the preferred node is not accessible. If
you do not specify a preferred node for a volume, the system selects the node in the I/O group
that has the fewest volumes to be the preferred node. After the preferred node is chosen, it
can be changed only when the volume is moved to a different I/O group. The management
GUI provides a wizard that moves volumes between I/O groups without disrupting host I/O
operations.

Preparing phase
Before you start the FlashCopy process, you must prepare a FlashCopy mapping. The
preparing phase flushes a volume’s data from cache in preparation for the FlashCopy
operation.

Primary volume
In a stand-alone MM or GM relationship, the target of write operations that are issued by the
host application. See also “Relationship” on page 906.

Priority flow control


Priority flow control (PFC) is a link-level flow control mechanism that is based on IEEE
standard 802.1Qbb. PFC operates on individual priorities. Instead of pausing all traffic on a
link, PFC is used to selectively pause traffic according to its class.

Private fabric
Configure one SAN per fabric so that it is dedicated for node-to-node communication. This
SAN is referred to as a private SAN.



Provisioned capacity
See “Capacity” on page 887.

Provisioning group
A provisioning group is an object that represents a set of MDisks that share physical
resources. Provisioning groups are used for capacity reporting and monitoring of
overprovisioned storage resources.

Public fabric
A public fabric is where you configure one SAN per fabric so that it is dedicated for host
attachment, storage system attachment, and RC operations. This SAN is referred to as a
public SAN. You can configure the public SAN to enable IBM Storage System node-to-node
communication also. You can optionally use the -localportfcmask parameter of the chsystem
command to constrain the node-to-node communication to use only the private SAN.

Qualifier
򐂰 A value that provides more information about a class, association, indication, method,
method parameter, instance, property, or reference.
򐂰 A modifier that makes a name unique.

Quorum disk
A disk that contains a reserved area that is used exclusively for system management. The
quorum disk is accessed when it is necessary to determine which half of the clustered system
continues to read and write data. Quorum disks can either be MDisks or drives.

Quorum index
The quorum index is the pointer that indicates the order that is used to resolve a tie. Nodes
attempt to lock the first quorum disk (index 0), followed by the next disk (index 1), and then the
last disk (index 2). The tie is broken by the node that locks them first.

Quota
The amount of disk space and number of files and directories that are assigned as upper
limits for a specified user, group of users, or file set.

RACE engine
The RACE engine compresses data on volumes in real time with minimal effect on
performance. See “Compression” on page 889 or “IBM Real-time Compression Appliance” on
page 897.

RAID
See “Redundant array of independent disks”.

RAID 0
A data striping technique, which is commonly called RAID Level 0 or RAID 0 because of its
similarity to common RAID data-mapping techniques. However, it includes no data
protection, so, strictly speaking, the appellation RAID is a misnomer. RAID 0 is also known as
data striping.

RAID 1
RAID 1 is a mirroring technique that is used on a storage array in which two or more identical
copies of data are maintained on separate mirrored disks.

RAID 10
A RAID level that combines RAID 1 mirroring with RAID 0 striping: data is striped across two
or more mirrored pairs of drives that present to the host an image of one or more drives. In
the event of a physical device failure, the data can be read or regenerated from the mirrored
drives in the RAID due to data redundancy.

RAID 5
RAID 5 is an array that has a data stripe, which includes a single logical parity drive. The
parity check data is distributed across all the disks of the array.

RAID 6
RAID 6 is a RAID level that has two logical parity drives per stripe, which are calculated with
different algorithms. Therefore, this level can continue to process read and write requests to
all of the array’s virtual disks (VDisks) in the presence of two concurrent disk failures.

RAID controller
See “Node canister” on page 902.

Raw capacity
See “Capacity” on page 887.

Real capacity
Real capacity is the amount of storage that is allocated to a volume copy from a storage pool.
See also “Capacity” on page 887.

Redundant array of independent disks


RAID refers to two or more physical disk drives that are combined in an array in a certain way,
which incorporates a RAID level for failure protection or better performance. The most
common RAID levels are 0, 1, 5, 6, and 10. Some storage administrators refer to the RAID
group as traditional RAID (TRAID). For distributed redundant array of independent disks
(DRAID), see “Distributed redundant array of independent disks” on page 892.

Read-intensive drives
The read-intensive flash drives that are available on IBM Storwize V7000 Gen2, IBM Storwize
V5000 Gen2, and IBM SAN Volume Controller 2145-DH8, SV1, and 24F enclosures are one
drive write per day (DWPD) read-intensive drives.

Rebuild area
Reserved capacity that is distributed across all drives in a RAID. If a drive in the array fails, the
lost array data is systematically restored into the reserved capacity, returning redundancy to
the array. The duration of the restoration process is minimized because all drive members
simultaneously participate in restoring the data. See also “Distributed redundant array of
independent disks” on page 892.



Reclaimable (or reclaimed) capacity
Reclaimable capacity is capacity that is no longer needed. Reclaimable capacity is created
when data is overwritten and the new data is stored in a new location, when data is marked as
unneeded by a host by using the SCSI UNMAP command, or when a volume is deleted.

Recovery key
See “Encryption recovery key” on page 893.

Redundant storage area network


A redundant SAN is a SAN configuration in which there is no single point of failure (SPOF).
Therefore, data traffic continues no matter what component fails. Connectivity between the
devices within the SAN is maintained (although possibly with degraded performance) when
an error occurs. A redundant SAN design is normally achieved by splitting the SAN into two
independent counterpart SANs (two SAN fabrics). In this configuration, if one path of the
counterpart SAN is destroyed, the other counterpart SAN path keeps functioning. See also
“Counterpart SAN” on page 890.

Relationship
In MM or GM, a relationship is the association between a master volume and an auxiliary
volume. These volumes also have the attributes of a primary or secondary volume. See also
“Auxiliary volume” on page 886, “Master volume” on page 901, “Primary volume” on
page 903, and “Secondary volume” on page 907.

Reliability, availability, and serviceability


Reliability, availability, and serviceability (RAS) are a combination of design methodologies,
system policies, and intrinsic capabilities that, when taken together, balance improved
hardware availability with the costs that are required to achieve it.

Reliability is the degree to which the hardware remains free of faults. Availability is the ability
of the system to continue operating despite predicted or experienced faults. Serviceability is
how efficiently and nondisruptively broken hardware can be fixed.

Remote Copy
See “Global Mirror” on page 895, “Metro Mirror” on page 901, and “Metro Global Mirror” on
page 901.

Remote fabric
The remote fabric is composed of SAN components (switches, cables, and other
components) that connect the components (nodes, hosts, and switches) of the remote cluster
together. Significant distances can exist between the components in the local cluster and
those components in the remote cluster.

SCSI initiator
The Small Computer System Interface (SCSI) initiator is the system component that initiates
communications with attached targets.

SCSI target
A device that acts as a subordinate to a SCSI initiator and consists of a set of one or more
LUs, each with an assigned LUN. The LUs on the SCSI target are typically I/O devices.

Secondary volume
Pertinent to RC, the volume in a relationship that contains a copy of data that is written by the
host application to the primary volume.

Secure Copy Protocol


Secure Copy Protocol (SCP) is the secure transfer of computer files between a local and a
remote host or between two remote hosts by using the Secure Shell (SSH) protocol.

Secure Sockets Layer certificate


Secure Sockets Layer (SSL) is the standard security technology for establishing an encrypted
link between a web server and a browser. This link ensures that all data that is passed
between the web server and browsers remain private. To create an SSL connection, a web
server requires an SSL certificate.

Sequential volume
A volume that uses extents from a single MDisk.

Serial-attached SCSI
Serial-attached SCSI (SAS) is a method that is used in accessing computer peripheral
devices that employs a serial (1 bit at a time) means of digital data transfer over thin cables.
The method is specified in the American National Standard Institute (ANSI) standard that is
called SAS. In the business enterprise, SAS is useful for access to mass storage devices,
particularly external HDDs.

Service Location Protocol


The Service Location Protocol (SLP) is an internet service discovery protocol that enables
computers and other devices to find services in a local area network (LAN) without prior
configuration. It was defined in RFC 2608.

Simple Network Management Protocol


Simple Network Management Protocol (SNMP) is a set of protocols for monitoring systems
and devices in complex networks. Information about managed devices is defined and stored
in a Management Information Base (MIB).

Small Computer System Interface


Small Computer System Interface (SCSI) is an ANSI-standard electronic interface with which
PCs can communicate with peripheral hardware, such as disk drives, tape drives, CD-ROM
drives, printers, and scanners, faster and more flexibly than with previous interfaces.

Snapshot
A snapshot is an image backup type that consists of a PiT view of a volume.

Solid-state drive
A solid-state drive (SSD) or flash drive is a disk that is made from solid-state memory and
therefore has no moving parts. Most SSDs use NAND-based flash memory technology. It is
defined to the IBM Storage System as a disk tier generic_ssd.

Space efficient
See “Thin provisioning” on page 909.



Space-efficient VDisk
See “Thin-provisioned volume” on page 909.

Spare
An extra storage component, such as a drive or tape, that is predesignated for use as a
replacement for a failed component.

Spare drive
A drive that is reserved in an array for rebuilding a failed drive in a RAID. If a drive fails in a
RAID, a spare drive from within that device adapter pair is selected to rebuild it.

Spare goal
The optimal number of spares that are needed to protect the drives in the array from failures.
The system logs a warning event when the number of spares that protect the array drops
below this number.

Space-efficient volume
See “Thin-provisioned volume” on page 909.

Stand-alone relationship
In FlashCopy, MM, and GM, relationships that do not belong to a consistency group and that
have a null consistency-group attribute.

Statesave
Binary data collection that is used for a problem determination by IBM service support.

Storage area network


A storage area network (SAN) is a dedicated storage network that is tailored to a specific
environment, which combines servers, systems, storage products, networking products,
software, and services.

Storage-class memory
Storage-class memory (SCM) is a type of NAND flash that includes a power source to ensure
that data is not lost due to a system crash or power failure. SCM treats non-volatile memory
as DRAM and includes it in the memory space of the server. Access to data in that space is
quicker than access to data in local, PCI-connected SSDs, direct-attached HDDs, or external
storage arrays. SCM read/write technology is up to 10 times faster than NAND flash drives
and is more durable.

SCM drives can be installed only in drive slots 21 - 24 in an IBM Storage System. The
highest capacity drive must be installed in the highest available drive slot.

Storage node
A component of a storage system that provides internal storage or a connection to one or
more external storage systems.

Storage pool (MDisk group)


A storage pool is a collection of storage capacity, which is made up of MDisks, that provides
the pool of storage capacity for a specific set of volumes. A storage pool can contain more
than one tier of disk, which is known as a multitier storage pool, which is a prerequisite of
Easy Tier automatic data placement.

Striped
Pertaining to a volume that is created from multiple MDisks that are in the storage pool.
Extents are allocated on the MDisks in the order that is specified.
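
As an illustration of the striping concept (not the product's internal allocation algorithm), the
following Python sketch assigns a volume's extents across a set of MDisks in round-robin
order:

def allocate_striped(volume_extents, mdisks):
    """Assign each volume extent to an MDisk in round-robin order."""
    next_free = {m: 0 for m in mdisks}        # next free extent per MDisk
    allocation = []
    for i in range(volume_extents):
        mdisk = mdisks[i % len(mdisks)]       # take the MDisks in turn
        allocation.append((mdisk, next_free[mdisk]))
        next_free[mdisk] += 1
    return allocation

print(allocate_striped(6, ["mdisk0", "mdisk1", "mdisk2"]))
# [('mdisk0', 0), ('mdisk1', 0), ('mdisk2', 0),
#  ('mdisk0', 1), ('mdisk1', 1), ('mdisk2', 1)]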

Support Assistance
A function that is used to provide support personnel remote access to the system to perform
troubleshooting and maintenance tasks.

Symmetric virtualization
Symmetric virtualization is a virtualization technique in which the physical storage, in the form
of a RAID, is split into smaller chunks of storage that are known as extents. These extents are
then concatenated by using various policies to make volumes. See also “Asymmetric
virtualization” on page 886.

Synchronous replication
Synchronous replication is a type of replication in which the application write operation is
made to both the source volume and target volume before control is given back to the
application. See also “Asynchronous replication” on page 886.

Syslog
A standard for transmitting and storing log messages from many sources to a centralized
location to enhance system management.
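
For example, the Python standard library can forward a message to such a centralized
location; the server address below is an assumption for illustration only:

import logging
import logging.handlers

logger = logging.getLogger("storage-events")
# Send messages to a central syslog server over UDP port 514
handler = logging.handlers.SysLogHandler(address=("syslog.example.com", 514))
logger.addHandler(handler)
logger.warning("Array mdisk3 is rebuilding onto a spare drive")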

T10 DIF
T10 DIF is a Data Integrity Field (DIF) extension to SCSI to enable end-to-end protection of
data from a host application to physical media.
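
The 8-byte DIF that is appended to each data block carries a 2-byte guard tag (a CRC of the
block data), a 2-byte application tag, and a 4-byte reference tag. The following Python sketch
computes the guard tag by using the T10 DIF CRC-16 polynomial (0x8BB7):

def crc16_t10dif(data):
    """CRC-16/T10-DIF: polynomial 0x8BB7, initial value 0x0000."""
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

block = bytes(512)                 # one 512-byte data block
guard_tag = crc16_t10dif(block)    # 2-byte guard tag of the 8-byte DIF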

Tie-breaker
When a cluster is split into two groups of nodes, the tie-breaker is the role of a quorum device
that is used to decide which group continues to operate as the system and handle all I/O
requests.

Thin-provisioning savings
See “Capacity” on page 887.

Thin-provisioned volume
A thin-provisioned volume is a volume that allocates storage when data is written to it.

Thin provisioning
Thin provisioning refers to the ability to define storage, usually a storage pool or volume, with
a “logical” capacity size that is larger than the actual physical capacity that is assigned to that
pool or volume. Therefore, a thin-provisioned volume is a volume with a virtual capacity that
differs from its real capacity.
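
A small worked example of the distinction in Python:

# A 1000 GiB volume presented to a host, with only 100 GiB of
# physical capacity currently allocated to it
virtual_capacity_gib = 1000     # size that the host sees
real_capacity_gib = 100         # physical capacity that is allocated

savings_gib = virtual_capacity_gib - real_capacity_gib
print(f"Thin-provisioning savings: {savings_gib} GiB "
      f"({savings_gib / virtual_capacity_gib:.0%} of virtual capacity)")
# Thin-provisioning savings: 900 GiB (90% of virtual capacity)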

Throttles
Throttling is a mechanism to control the amount of resources that are used when the system
is processing I/Os on supported objects. The system supports throttles on hosts, host
clusters, volumes, copy offload operations, and storage pools. If a throttle limit is defined, the
system either processes the I/O for that object or delays the processing of the I/O to free
resources for more critical I/O operations.
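
The following Python sketch illustrates the delay behavior with a simple one-second IOPS
window; it is an illustration of the concept, not the product's internal implementation:

import time

class Throttle:
    """Delay I/O processing when a per-second IOPS limit is reached."""
    def __init__(self, iops_limit):
        self.iops_limit = iops_limit
        self.window_start = time.monotonic()
        self.count = 0

    def admit(self):
        now = time.monotonic()
        if now - self.window_start >= 1.0:        # start a new window
            self.window_start, self.count = now, 0
        if self.count >= self.iops_limit:         # limit hit: delay I/O
            time.sleep(self.window_start + 1.0 - now)
            self.window_start, self.count = time.monotonic(), 0
        self.count += 1                           # admit this I/O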

Throughput
A measure of the amount of information that is transmitted over a network in a period.
Throughput is measured in bits per second (bps), kilobits per second (Kbps), or megabits per
second (Mbps).

Total capacity savings
See “Capacity” on page 887.

Transparent Cloud Tiering
Transparent Cloud Tiering (TCT) is a separately installable feature of IBM Spectrum Scale
that provides a native cloud storage tier.

Trial license
A temporary entitlement to use a licensed function.

Unconfigured mode
An access mode in which an external storage MDisk is not configured in the system, so no
operations can be performed. See also “Image mode” on page 897 and “Managed mode” on
page 900.

Unique identifier
A unique identifier (UID) is an identifier that is assigned to storage system LUs when they are
created. It is used to identify the LU regardless of the LUN, the status of the LU, or whether
alternative paths exist to the same device. Typically, a UID is used only once.

VDisk
See “Virtual disk (VDisk)”.

VDisk-to-host mapping
See “Host mapping” on page 896.

Virtualized storage
Virtualized storage is physical storage that has virtualization techniques that are applied to it
by a virtualization engine.

Virtual capacity
The amount of storage that is available. In a thin-provisioned volume, the virtual capacity can
be different from the real capacity. In a standard volume, the virtual capacity and real capacity
are the same.

Virtual disk (VDisk)
See “Volume” on page 911.

Virtualization
In the storage industry, virtualization is a concept in which a pool of storage is created from
several storage systems, which can come from various vendors. The pool can be split into
volumes that are visible to the host systems that use them. See also “Capacity licensing” on
page 888.

Virtual local area network
Virtual local area network (VLAN) tagging separates network traffic at the layer 2 level for
Ethernet transport. The system supports VLAN configuration on both IPv4 and IPv6
connections.

Virtual storage area network
A virtual storage area network (VSAN) is a logical fabric entity that is defined within the SAN.
It can be defined on a single physical SAN switch or across multiple physical switches or
directors. In VMware terminology, a VSAN is a logical layer of storage capacity that is built
from physical disk drives that are attached directly to the ESXi hosts. That solution is not
considered within the scope of this publication.

Vital product data
Vital product data (VPD) is information that uniquely defines system, hardware, software, and
microcode elements of a processing system.

Volume
A volume is an IBM Storage System logical device that appears to host systems that are
attached to the SAN as a SCSI disk. Each volume is associated with exactly one I/O group. A
volume has a preferred node within the I/O group.

Volume copy
A volume copy is a physical copy of the data that is stored on a volume. Mirrored volumes
have two copies. Non-mirrored volumes have one copy.

Volume protection
To prevent active volumes or host mappings from inadvertent deletion, the system supports a
global setting that prevents these objects from being deleted if the system detects that they
have recent I/O activity. When you delete a volume, the system checks to verify whether it is
part of a host mapping, FlashCopy mapping, or a Remote Copy relationship. In these cases,
the system fails to delete the volume unless the -force parameter is specified. Using the
-force parameter can lead to unintentional deletions of volumes that are still active. Active
means that the system detected recent I/O activity to the volume from any host.
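
In outline, the check behaves like the following Python sketch; the object names are
hypothetical and stand in for the system's internal logic:

class Volume:
    """Hypothetical stand-in for the system's volume object."""
    def __init__(self, name, recent_io):
        self.name = name
        self.recent_io = recent_io

def delete_volume(volume, force=False):
    # Volume protection: refuse to delete a volume that had recent
    # I/O activity unless the caller explicitly overrides the check
    if volume.recent_io and not force:
        raise RuntimeError(f"{volume.name} is active; deletion refused "
                           "(equivalent to omitting -force)")
    print(f"{volume.name} deleted")

delete_volume(Volume("vol0", recent_io=False))             # succeeds
delete_volume(Volume("vol1", recent_io=True), force=True)  # overridden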

Volume snapshot
A collection of objects on a cloud storage account that represents the data of a volume at a
particular time.

Worldwide ID
A worldwide ID (WWID) is a name identifier that is unique worldwide and that is represented
by a 64-bit value that includes the IEEE-assigned organizationally unique identifier (OUI).
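
For an IEEE Registered (NAA type 5) name, the OUI occupies the 24 bits after the 4-bit NAA
field, as the following Python sketch shows for a representative value:

# Representative WWN value for illustration (50:05:07:68:0b:21:4f:ee)
wwn = 0x500507680B214FEE

naa = wwn >> 60                       # 4-bit Network Address Authority
oui = (wwn >> 36) & 0xFFFFFF          # 24-bit IEEE OUI (NAA type 5)
print(f"NAA format: {naa:x}, OUI: {oui:06x}")
# NAA format: 5, OUI: 005076 (an IBM-assigned OUI)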

Worldwide name
Worldwide name (WWN) is a 64-bit, unsigned name identifier that is unique.

Worldwide node name
Worldwide node name (WWNN) is a unique 64-bit identifier for a node, such as a host or
storage system, that contains one or more FC ports. See also “Worldwide port name” on
page 912.

Worldwide port name
Worldwide port name (WWPN) is a unique 64-bit identifier that is associated with an FC
adapter port. The WWPN is assigned in an implementation-independent and
protocol-independent manner. See also “Worldwide node name” on page 911.

Write-through mode
Write-through mode is a process in which data is written to the storage device at the same
time that the data is cached.

Written capacity
See “Capacity” on page 887.

Zoning
The grouping of multiple ports to form a virtual and private storage network. Ports that are
members of a zone can communicate with each other, but are isolated from ports in other
zones. See also “Fibre Channel” on page 894.

Related publications

The publications that are listed in this section are considered suitable for a more detailed
description of the topics that are covered in this book.

IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this
document. Some publications that are referenced in this list might be available in softcopy
only.
• IBM FlashSystem 5000 Family Products, SG24-8449
• IBM FlashSystem 9100 Architecture, Performance, and Implementation, SG24-8425
• IBM FlashSystem 9200 and 9100 Best Practices and Performance Guidelines, SG24-8448
• IBM System Storage SAN Volume Controller, IBM Storwize V7000, and IBM FlashSystem 7200 Best Practices and Performance Guidelines, SG24-7521
• Implementing the IBM Storwize V5000 Gen2 (including the Storwize V5010, V5020, and V5030) with IBM Spectrum Virtualize V8.2.1, SG24-8162
• Implementing the IBM Storwize V7000 with IBM Spectrum Virtualize V8.2.1, SG24-7938
• Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V8.2.1, SG24-7933
• Introduction and Implementation of Data Reduction Pools and Deduplication, SG24-8430

You can search for, view, download, or order these documents and other Redbooks,
Redpapers, web docs, drafts, and additional materials, at the following website:
[Link]/redbooks

Help from IBM

IBM Support and downloads
[Link]/support

IBM Global Services
[Link]/services
