IBM System Storage SAN Volume Controller and IBM Storwize V7000 Command-Line Interface User's Guide
Storwize V7000
Version 6.4.1
GC27-2287-04
Note
Before using this information and the product it supports, read the information in Notices on page 599.
This edition applies to IBM System Storage SAN Volume Controller, Version 6.4.0, and the IBM Storwize V7000,
Version 6.4.0, and to all subsequent releases and modifications until otherwise indicated in new editions.
This edition replaces GC27-2287-02.
Copyright IBM Corporation 2003, 2012.
US Government Users Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Tables . . . ix

About this guide . . . xi
Who should use this guide . . . xi
Accessibility . . . xi
Summary of changes for GC27-2287-04 SAN Volume Controller Command-Line Interface User's Guide . . . xi
Emphasis . . . xiii
SAN Volume Controller library and related publications . . . xiii
How to order IBM publications . . . xvi
Sending your comments . . . xvi
Syntax diagrams . . . xvi
  Terminology . . . xviii
  CLI special characters . . . xviii
  Using wildcards in the SAN Volume Controller CLI . . . xix
  Data types and value ranges . . . xix
  CLI commands and parameters . . . xxiv
  CLI flags . . . xxiv
  CLI messages . . . xxv
Attributes of the -filtervalue parameters . . . xxv

Chapter 1. Setting up an SSH client . . . 1
Setting up an SSH client on a Windows host . . . 2
  Generating an SSH key pair using PuTTY . . . 2
  Configuring a PuTTY session for the CLI . . . 3
  Connecting to the CLI using PuTTY . . . 4
  Starting a PuTTY session for the CLI . . . 6
Preparing the SSH client on an AIX or Linux host . . . 6
  Generating an SSH key pair using OpenSSH . . . 7
  Connecting to the CLI using OpenSSH . . . 8
Working with local and remote users . . . 8

Chapter 2. Copying the SAN Volume Controller software upgrade files using PuTTY scp . . . 9

Chapter 3. Using the CLI . . . 11
Setting the clustered system time using the CLI . . . 11
Setting cluster date and time . . . 12
Viewing and updating license settings using the CLI . . . 12
Displaying clustered system properties using the CLI . . . 13
Maintaining passwords for the front panel using the CLI . . . 14
Re-adding a repaired node to a clustered system using the CLI . . . 15
Displaying node properties using the CLI . . . 18
Discovering MDisks using the CLI . . . 19
Creating storage pools using the CLI . . . 20
Adding MDisks to storage pools using the CLI . . . 23
Setting a quorum disk using the CLI . . . 24
Modifying the amount of available memory for Copy Services and Volume Mirroring features using the CLI . . . 24
Creating volumes using the CLI . . . 26
Adding a copy to a volume using the CLI . . . 28
Deleting a copy from a volume using the CLI . . . 29
Configuring host objects using the CLI . . . 30
Creating host mappings using the CLI . . . 31
Creating FlashCopy mappings using the CLI . . . 32
  Preparing and starting a FlashCopy mapping using the CLI . . . 33
  Stopping FlashCopy mappings using the CLI . . . 34
  Deleting a FlashCopy mapping using the CLI . . . 34
Creating a FlashCopy consistency group and adding mappings using the CLI . . . 35
  Preparing and starting a FlashCopy consistency group using the CLI . . . 36
  Stopping a FlashCopy consistency group using the CLI . . . 37
  Deleting a FlashCopy consistency group using the CLI . . . 38
Creating Metro Mirror or Global Mirror relationships using the CLI . . . 38
  Modifying Metro Mirror or Global Mirror relationships using the CLI . . . 39
  Starting and stopping Metro Mirror or Global Mirror relationships using the CLI . . . 39
  Displaying the progress of Metro Mirror or Global Mirror relationships using the CLI . . . 40
  Switching Metro Mirror or Global Mirror relationships using the CLI . . . 40
  Deleting Metro Mirror and Global Mirror relationships using the CLI . . . 41
Creating Metro Mirror or Global Mirror consistency groups using the CLI . . . 41
  Modifying Metro Mirror or Global Mirror consistency groups using the CLI . . . 41
  Starting and stopping Metro Mirror or Global Mirror consistency-group copy processes using the CLI . . . 42
  Deleting Metro Mirror or Global Mirror consistency groups using the CLI . . . 42
Creating Metro Mirror and Global Mirror partnerships using the CLI . . . 43
  Modifying Metro Mirror and Global Mirror partnerships using the CLI . . . 43
  Starting and stopping Metro Mirror and Global Mirror partnerships using the CLI . . . 44
  Deleting Metro Mirror and Global Mirror partnerships using the CLI . . . 44
Determining the WWPNs of a node using the CLI . . . 44
Listing node-dependent volumes using the CLI . . . 45
Determining the VDisk name from the device identifier on the host . . . 46
Determining the host that a VDisk (volume) is mapped to . . . 46
iv SAN Volume Controller and Storwize V7000: Command-Line Interface User's Guide
rmportip . . . 134
setclustertime . . . 135
setsystemtime . . . 135
setpwdreset . . . 136
settimezone . . . 136
startstats . . . 137
stopstats (Deprecated) . . . 139
stopcluster . . . 139
stopsystem . . . 139

Chapter 9. Clustered system diagnostic and service-aid commands . . . 141
applysoftware . . . 141
caterrlog (Deprecated) . . . 143
caterrlogbyseqnum (Deprecated) . . . 143
cherrstate . . . 143
clearerrlog . . . 143
dumperrlog . . . 144
finderr . . . 144
lserrlogbyfcconsistgrp (Deprecated) . . . 145
lserrlogbyfcmap (Deprecated) . . . 145
lserrlogbyhost (Deprecated) . . . 145
lserrlogbyiogrp (Deprecated) . . . 145
lserrlogbymdisk (Deprecated) . . . 145
lserrlogbymdiskgrp (Deprecated) . . . 145
lserrlogbynode (Deprecated) . . . 145
lserrlogbyrcconsistgrp (Deprecated) . . . 145
lserrlogbyrcrelationship (Deprecated) . . . 145
lserrlogbyvdisk (Deprecated) . . . 146
lserrlogdumps (Deprecated) . . . 146
cheventlog . . . 146
lseventlog . . . 146
lssyslogserver . . . 151
setlocale . . . 152
svqueryclock . . . 153
writesernum . . . 153

Chapter 10. Controller command . . . 155
chcontroller . . . 155

Chapter 11. Drive commands . . . 157
applydrivesoftware . . . 157
chdrive . . . 158
lsdrive . . . 159
lsdrivelba . . . 161
lsdriveprogress . . . 162
triggerdrivedump . . . 163

Chapter 12. Email and event notification commands . . . 165
chemail . . . 165
chemailserver . . . 167
chemailuser . . . 168
chsnmpserver . . . 169
chsyslogserver . . . 170
mkemailserver . . . 171
mkemailuser . . . 172
mksnmpserver . . . 174
mksyslogserver . . . 175
rmemailserver . . . 176
rmemailuser . . . 176
rmsnmpserver . . . 177
rmsyslogserver . . . 177
sendinventoryemail . . . 178
startemail . . . 179
stopemail . . . 179
testemail . . . 180

Chapter 13. Enclosure commands . . . 181
addcontrolenclosure . . . 181
chenclosure . . . 181
chenclosurecanister . . . 182
chenclosureslot . . . 183
lsenclosure . . . 184
lsenclosurebattery . . . 186
lscontrolenclosurecandidate . . . 188
lsenclosurecanister . . . 188
lsenclosurepsu . . . 191
lsenclosureslot . . . 193
triggerenclosuredump . . . 195

Chapter 14. Licensing commands . . . 197
chlicense . . . 197
dumpinternallog . . . 199

Chapter 15. IBM FlashCopy commands . . . 201
chfcconsistgrp . . . 201
chfcmap . . . 201
mkfcconsistgrp . . . 203
mkfcmap . . . 204
prestartfcconsistgrp . . . 206
prestartfcmap . . . 208
rmfcconsistgrp . . . 209
rmfcmap . . . 209
startfcconsistgrp . . . 210
startfcmap . . . 212
stopfcconsistgrp . . . 213
stopfcmap . . . 214

Chapter 16. Host commands . . . 217
addhostiogrp . . . 217
addhostport . . . 217
chhost . . . 218
mkhost . . . 220
rmhost . . . 221
rmhostiogrp . . . 222
rmhostport . . . 223

Chapter 17. Information commands . . . 225
ls2145dumps (Deprecated) . . . 225
lscimomdumps (Deprecated) . . . 225
lscopystatus . . . 225
lsclustercandidate . . . 226
lscluster . . . 226
lsclusterip . . . 226
lssystem . . . 226
lssystemip . . . 230
lssystemstats . . . 232
Contents v
lscontroller . . . 236
lspartnershipcandidate . . . 239
lscontrollerdependentvdisks . . . 239
lscurrentuser . . . 240
lsdiscoverystatus . . . 241
lsdumps . . . 242
lsemailserver . . . 243
lsemailuser . . . 244
lsfabric . . . 245
lsfcconsistgrp . . . 247
lsfcmap . . . 249
lsfcmapcandidate . . . 251
lsfcmapprogress . . . 252
lsfcmapdependentmaps . . . 253
lsfeaturedumps (Deprecated) . . . 253
lsfreeextents . . . 254
lshbaportcandidate . . . 254
lshost . . . 255
lshostiogrp . . . 258
lshostvdiskmap . . . 259
lsiogrp . . . 261
lsiogrphost . . . 263
lsiogrpcandidate . . . 264
lsiostatsdumps (Deprecated) . . . 264
lsiotracedumps (Deprecated) . . . 265
lsiscsiauth . . . 265
lslicense . . . 266
lsmdisk . . . 267
lsmdiskdumps (Deprecated) . . . 272
lsmdisklba . . . 272
lsmdiskcandidate . . . 273
lsmdiskextent . . . 275
lsmdiskgrp . . . 276
lsmdiskmember . . . 280
lsmigrate . . . 281
lsnode (SAN Volume Controller) / lsnodecanister (Storwize V7000) . . . 282
lsnodecandidate (SAN Volume Controller) . . . 286
lsnodedependentvdisks (Deprecated) . . . 287
lsnodehw (SAN Volume Controller) / lsnodecanisterhw (Storwize V7000) . . . 288
lsnodestats (SAN Volume Controller) / lsnodecanisterstats (Storwize V7000) . . . 290
lsnodevpd (SAN Volume Controller) / lsnodecanistervpd (Storwize V7000) . . . 297
lspartnership . . . 303
lspartnershipcandidate . . . 304
lsportip . . . 305
lsportfc . . . 309
lsportsas . . . 311
lsquorum . . . 312
lsrcconsistgrp . . . 313
lsrcrelationship . . . 316
lsrcrelationshipcandidate . . . 319
lsrcrelationshipprogress . . . 320
lsrepairsevdiskcopyprogress . . . 321
lsrepairvdiskcopyprogress . . . 322
lsrmvdiskdependentmaps . . . 324
lsroute . . . 325
lssevdiskcopy . . . 326
lssnmpserver . . . 329
lssoftwaredumps (Deprecated) . . . 330
lssoftwareupgradestatus . . . 330
lstimezones . . . 331
lsuser . . . 332
lsusergrp . . . 333
lsvdisk . . . 334
lsvdiskaccess . . . 342
lsvdiskcopy . . . 343
lsvdiskdependentmaps . . . 346
lsvdiskextent . . . 347
lsvdiskfcmapcopies . . . 348
lsvdiskfcmappings . . . 349
lsvdiskhostmap . . . 349
lsvdisklba . . . 351
lsvdiskmember . . . 352
lsvdiskprogress . . . 354
lsvdisksyncprogress . . . 355
lsdependentvdisks . . . 356
lssasfabric . . . 357
showtimezone . . . 358

Chapter 18. Livedump commands . . . 359
cancellivedump . . . 359
lslivedump . . . 359
preplivedump . . . 360
triggerlivedump . . . 360

Chapter 19. Managed disk commands . . . 363
applymdisksoftware (Discontinued) . . . 363
chmdisk . . . 363
chquorum . . . 363
dumpallmdiskbadblocks . . . 365
dumpmdiskbadblocks . . . 366
includemdisk . . . 367
setquorum (Deprecated) . . . 368
triggermdiskdump (Discontinued) . . . 368

Chapter 20. Managed disk group commands . . . 369
addmdisk . . . 369
chmdiskgrp . . . 370
mkmdiskgrp . . . 371
rmmdisk . . . 373
rmmdiskgrp . . . 374

Chapter 21. Metro Mirror and Global Mirror commands . . . 377
chpartnership . . . 377
chrcconsistgrp . . . 378
chrcrelationship . . . 379
mkpartnership . . . 383
mkrcconsistgrp . . . 383
mkrcrelationship . . . 384
rmpartnership . . . 387
rmrcconsistgrp . . . 388
rmrcrelationship . . . 388
startrcconsistgrp . . . 389
startrcrelationship . . . 392
stoprcconsistgrp . . . 394
stoprcrelationship . . . 396
switchrcconsistgrp . . . 397
switchrcrelationship . . . 398

Chapter 22. Migration commands . . . 401
migrateexts . . . 401
migratetoimage . . . 402
migratevdisk . . . 403

Chapter 23. Service information commands . . . 405
lscmdstatus . . . 405
lsfiles . . . 405
lshardware . . . 406
lsservicenodes . . . 409
lsservicerecommendation . . . 411
lsservicestatus . . . 411

Chapter 24. Service mode commands (Discontinued) . . . 419
applysoftware (Discontinued) . . . 419
cleardumps . . . 419
dumperrlog . . . 420
exit (Discontinued) . . . 421

Chapter 25. Service mode information commands (Discontinued) . . . 423
ls2145dumps (Discontinued) . . . 423
lscimomdumps (Discontinued) . . . 423
lsclustervpd (Discontinued) . . . 423
lserrlogdumps (Discontinued) . . . 423
lsfeaturedumps (Discontinued) . . . 423
lsiostatsdumps (Discontinued) . . . 423
lsiotracedumps (Discontinued) . . . 423
lsmdiskdumps (Discontinued) . . . 423
lssoftwaredumps (Discontinued) . . . 423

Chapter 26. Service task commands . . . 425
chenclosurevpd . . . 425
chnodeled . . . 426
chserviceip . . . 426
chwwnn . . . 428
cpfiles . . . 429
installsoftware . . . 430
leavecluster . . . 431
metadata . . . 431
mkcluster . . . 432
rescuenode . . . 433
resetpassword . . . 434
restartservice . . . 434
setlocale (satask) . . . 435
setpacedccu . . . 436
settempsshkey . . . 436
snap . . . 437
startservice . . . 437
stopnode . . . 438
stopservice . . . 438
t3recovery . . . 439

Chapter 27. Tracing commands . . . 441
setdisktrace . . . 441
settrace . . . 441
starttrace . . . 444
stoptrace . . . 444

Chapter 28. User management commands . . . 447
chauthservice . . . 447
chcurrentuser . . . 449
chldap . . . 450
chldapserver . . . 452
chuser . . . 454
chusergrp . . . 455
getstatus . . . 456
mkuser . . . 456
lsldap . . . 457
lsldapserver . . . 459
mkldapserver . . . 460
mkusergrp . . . 461
rmldapserver . . . 462
rmuser . . . 463
rmusergrp . . . 463
testldapserver . . . 464

Chapter 29. Virtual disk commands . . . 467
addvdiskcopy . . . 467
addvdiskaccess . . . 472
chvdisk . . . 473
movevdisk . . . 476
expandvdisksize . . . 477
mkvdisk . . . 479
mkvdiskhostmap . . . 486
recovervdisk . . . 488
recovervdiskbycluster . . . 489
recovervdiskbysystem . . . 489
recovervdiskbyiogrp . . . 490
repairsevdiskcopy . . . 490
repairvdiskcopy . . . 491
rmvdisk . . . 492
rmvdiskcopy . . . 494
rmvdiskaccess . . . 495
rmvdiskhostmap . . . 496
shrinkvdisksize . . . 496
splitvdiskcopy . . . 499

Chapter 30. Command-line interface messages . . . 501

Appendix. Accessibility features for IBM SAN Volume Controller . . . 597

Notices . . . 599
Trademarks . . . 601

Index . . . 603
Tables
1. SAN Volume Controller library . . . xiv
2. Other IBM publications . . . xv
3. IBM documentation and related websites . . . xv
4. Valid filter attributes . . . xxvi
5. Maximum volume capacity by extent size . . . 22
6. Memory required for Volume Mirroring and Copy Services . . . 25
7. RAID level comparisons . . . 25
8. Volume copy resynchronization rates . . . 27
9. charraymember combination options . . . 88
10. MDisk output . . . 91
11. lsarrayinitprogress output . . . 93
12. lsarraylba output . . . 95
13. lsarraymember output . . . 96
14. lsarraymembergoals output . . . 98
15. lsarraymemberprogress output . . . 99
16. lsarraysyncprogress output . . . 101
17. IP address list formats . . . 122
18. Memory required for Volume Mirroring and Copy Services . . . 124
19. RAID level comparisons . . . 124
20. lseventlog output . . . 147
21. lsdrive output . . . 159
22. lsdrivelba output . . . 161
23. lsenclosure output . . . 185
24. lsenclosurebattery outputs . . . 187
25. lscontrolenclosurecandidate attribute values . . . 188
26. lsenclosurecanister output . . . 189
27. lsenclosurepsu output . . . 191
28. lsenclosureslot output . . . 193
29. Relationship between the rate, data rate, and grains per second values . . . 203
30. Relationship between the rate, data rate, and grains per second values . . . 206
31. Attribute values . . . 227
32. lssystemstats attribute values . . . 233
33. Stat_name field values . . . 234
34. MDisk output . . . 269
35. lsmdisklba command output . . . 273
36. lsnode or lsnodecanister attribute values . . . 284
37. lsnodecandidate outputs . . . 287
38. Attribute values for lsnodehw and lsnodecanisterhw . . . 288
39. Attribute values for lsnodestats or lsnodecanisterstats . . . 291
40. Stat_name field values . . . 295
41. lspartnership attribute values . . . 303
42. lsportip output . . . 307
43. lsportfc output . . . 310
44. lsportsas output . . . 312
45. lsrcconsistgrp command output values . . . 315
46. lsrcrelationship command attributes and values . . . 317
47. lsvdisklba command output scenarios . . . 352
48. lssasfabric output . . . 357
49. lslivedump outputs . . . 360
50. Number of extents reserved by extent size . . . 365
51. stoprcconsistgrp consistency group states . . . 395
52. stoprcrelationship consistency group states . . . 397
53. lshardware attribute values . . . 407
54. lsservicenodes outputs . . . 409
55. lsservicenodes outputs . . . 410
56. lsservicestatus output . . . 412
57. lsservicestatus output . . . 414
58. lsldap attribute values . . . 458
59. lsldapserver attribute values . . . 459
60. testldapserver attribute values . . . 465
61. Storage pool Easy Tier settings . . . 469
62. Relationship between the rate value and the data copied per second . . . 471
63. Relationship between the rate value and the data copied per second . . . 475
64. Relationship between the rate value and the data copied per second . . . 485
Before you use the SAN Volume Controller, you should have an understanding of storage area networks
(SANs), the storage requirements of your enterprise, and the capabilities of your storage units.
Accessibility
IBM has a long-standing commitment to people with disabilities. In keeping with that commitment to
accessibility, IBM strongly supports the U.S. Federal government's use of accessibility as a criterion in the
procurement of Electronic Information Technology (EIT).
IBM strives to provide products with usable access for everyone, regardless of age or ability.
For more information, see "Accessibility features for IBM SAN Volume Controller" on page 597.
New commands
The following new commands have been added for this edition:
v lsenclosurestats
v lsportsas on page 311
Changed commands
The following commands and topics have been updated for this edition:
v addnode (SAN Volume Controller only) on page 113
v addcontrolenclosure on page 181
v Adding MDisks to storage pools using the CLI on page 23
v applydrivesoftware on page 157
v addvdiskcopy on page 467
v chemail on page 165
v chemailuser on page 168
v chenclosure on page 181
Deprecated commands
New topics
Changed topics
Emphasis
Different typefaces are used in this guide to show emphasis.
The IBM System Storage SAN Volume Controller Information Center contains all of the information that
is required to install, configure, and manage the SAN Volume Controller. The information center is
updated between SAN Volume Controller product releases to provide the most current documentation.
The information center is available at the following website:
publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
Unless otherwise noted, the publications in the SAN Volume Controller library are available in Adobe
portable document format (PDF) from the following website:
www.ibm.com/storage/support/2145
Each of the PDF publications in Table 1 on page xiv is available in this information center by clicking the
number in the Order number column:
Table 1. SAN Volume Controller library (continued)

IBM Statement of Limited Warranty (2145 and 2076)
   This multilingual document provides information about the IBM warranty for machine types 2145 and 2076.
   Order number: Part number 4377322

IBM License Agreement for Machine Code
   This multilingual guide contains the License Agreement for Machine Code for the SAN Volume Controller product.
   Order number: SC28-6872 (contains Z125-5468)
Table 2 lists IBM publications that contain information related to the SAN Volume Controller.
Table 2. Other IBM publications

IBM System Storage Productivity Center Introduction and Planning Guide
   This guide introduces the IBM System Storage Productivity Center hardware and software.
   Order number: SC23-8824

Read This First: Installing the IBM System Storage Productivity Center
   This guide describes how to install the IBM System Storage Productivity Center hardware.
   Order number: GI11-8938

IBM System Storage Productivity Center User's Guide
   This guide describes how to configure the IBM System Storage Productivity Center software.
   Order number: SC27-2336

IBM System Storage Multipath Subsystem Device Driver User's Guide
   This guide describes the IBM System Storage Multipath Subsystem Device Driver for IBM System Storage products and how to use it with the SAN Volume Controller.
   Order number: GC52-1309

IBM Storage Management Pack for Microsoft System Center Operations Manager User Guide
   This guide describes how to install, configure, and use the IBM Storage Management Pack for Microsoft System Center Operations Manager (SCOM).
   Order number: GC27-3909 (publibfp.dhe.ibm.com/epubs/pdf/c2739092.pdf)

IBM Storage Management Console for VMware vCenter, version 3.0.0, User Guide
   This publication describes how to install, configure, and use the IBM Storage Management Console for VMware vCenter, which enables SAN Volume Controller and other IBM storage systems to be integrated in VMware vCenter environments.
   Order number: GA32-0929 (publibfp.dhe.ibm.com/epubs/pdf/a3209295.pdf)
Table 3 lists websites that provide publications and other information about the SAN Volume Controller
or related products or technologies.
Table 3. IBM documentation and related websites

Support for SAN Volume Controller (2145): www.ibm.com/storage/support/2145
Support for IBM System Storage and IBM TotalStorage products: www.ibm.com/storage/support/
IBM Publications Center: www.ibm.com/e-business/linkweb/publications/servlet/pbi.wss
To view a PDF file, you need Adobe Acrobat Reader, which can be downloaded from the Adobe website:
www.adobe.com/support/downloads/main.html
The IBM Publications Center offers customized search functions to help you find the publications that
you need. Some publications are available for you to view or download at no charge. You can also order
publications. The publications center displays prices in your local currency. You can access the IBM
Publications Center through the following website:
www.ibm.com/e-business/linkweb/publications/servlet/pbi.wss
To submit any comments about this book or any other SAN Volume Controller documentation:
v Go to the feedback page on the website for the SAN Volume Controller Information Center at
publib.boulder.ibm.com/infocenter/svc/ic/index.jsp?topic=/com.ibm.storage.svc.console.doc/
feedback.htm. There you can use the feedback page to enter and submit comments or browse to the
topic and use the feedback link in the running footer of that page to identify the topic for which you
have a comment.
v Send your comments by email to [email protected]. Include the following information for this
publication or use suitable replacements for the publication title and form number for the publication
on which you are commenting:
Publication title: IBM System Storage SAN Volume Controller and IBM Storwize V7000 Command-Line
Interface User's Guide
Publication form number: GC27-2287-04
Page, table, or illustration numbers that you are commenting on
A detailed description of any information that should be changed
Syntax diagrams
A syntax diagram uses symbols to represent the elements of a command and to specify the rules for
using these elements.
The following table explains how to read the syntax diagrams that represent the command-line interface
(CLI) commands. In doing so, it defines the symbols that represent the CLI command elements.
Element: Main path line
   A syntax diagram begins on the left with double arrowheads (>>) and ends on the
   right with two arrowheads facing each other (><). If a diagram is longer than one
   line, each line to be continued ends with a single arrowhead (>) and the next line
   begins with a single arrowhead. Read the diagrams from left to right, top to
   bottom, following the main path line.

Element: Keyword (for example, esscli)
   Represents the name of a command, flag, parameter, or argument. A keyword is not
   in italics. Spell a keyword exactly as it is shown in the syntax diagram.

Element: Required keywords (for example, -a AccessFile, -u Userid, -p Password)
   Indicate the parameters or arguments that you must specify for the command.
   Required keywords appear on the main path line. Required keywords that cannot be
   used together are stacked vertically.

Element: Optional keywords (for example, -h, -help, -?)
   Indicate the parameters or arguments that you can choose to specify for the
   command. Optional keywords appear below the main path line. Mutually exclusive
   optional keywords are stacked vertically.

Element: Default value (for example, FCP shown above the line for a protocol
parameter that also accepts FICON)
   Appears above the main path line.
Terminology
These are abbreviations that are most commonly used for the command-line interface operations.
operation of a command. You can use multiple flags, followed by parameters, when you issue a
command. The - character cannot be used as the first character of an object name.
vertical bar (|)
A vertical bar signifies that you choose only one value. For example, [ a | b ] in brackets
indicates that you can choose a, b, or nothing. Similarly, { a | b } in braces indicates that you
must choose either a or b.
The SAN Volume Controller supports the use of the asterisk character (*) as a wildcard within the
arguments of certain parameters. Some shell behaviors must be considered when using wildcards,
to prevent unexpected results. These behaviors and the ways to avoid them are as follows:
1. Running the command while logged onto the node.
The shell will attempt to interpret any of the special characters if they are not escaped (preceded with
a backslash character). Wildcards will be expanded into a list of files if any files exist that match the
wildcards. If no matching files exist, the wildcard is passed to the SAN Volume Controller command
untouched.
To prevent expansion, enclose the argument in quotation marks or precede the wildcard with a backslash character.
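As a local-shell illustration of this escaping behavior (echo stands in for a SAN Volume Controller command, and vdisk* is a hypothetical argument):

```shell
# Run from an empty directory so no local file names match the pattern.
cd "$(mktemp -d)"

# Unprotected, the shell would expand vdisk* against matching local files;
# quoted or escaped, the asterisk is passed through literally.
echo "vdisk*"
echo vdisk\*
```

In an empty directory the unescaped form also passes through unchanged, which mirrors the behavior described above when no files match the wildcard.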
Note: When creating a new object, the clustered system (system) assigns a default -type name if one is
not specified. The default -type name consists of the object prefix and the lowest available integer,
starting from 0 (starting from 1 for nodes); for example, vdisk23. The default -type name must be unique.
filename_prefix The prefix of a file name, containing a maximum of 128 characters. Valid characters
are: - (dash), _ (underscore), a-z, A-Z, and 0-9.
Data types Value ranges
name_arg Names can be specified or changed using the create and modify functions. The view
commands provide both the name and ID of an object.
Note: The system name is set when the system is created.
The first character of a name_arg must be nonnumeric. The first character of an object
name cannot be a - (dash) because the CLI (command-line interface) interprets it as
the start of the next parameter.
The standard defines a way to extend the serial number using letters in the place of
numbers in the 5-digit field.
ip_address_arg The argument follows the standard rules for dotted decimal notation.
The following Internet Protocol Version 4 (IPv4) and Internet Protocol Version 6 (IPv6) address
formats are supported:
IPv4 (no port set, SAN Volume Controller uses default)
1.2.3.4
IPv4 with specific port
1.2.3.4:22
Full IPv6, default port
1234:1234:0001:0123:1234:1234:1234:1234
Full IPv6, default port, leading zeros suppressed
1234:1234:1:123:1234:1234:1234:1234
Full IPv6 with port
[2002:914:fc12:848:209:6bff:fe8c:4ff6]:23
Zero-compressed IPv6, default port
2002::4ff6
Zero-compressed IPv6 with port
[2002::4ff6]:23
dns_name This is the dotted domain name for the system subnet (for example, ibm.com).
A combination of the host name and the dns_name is used to access the system, for
example: https://hostname.ibm.com/
capacity_value The capacity expressed within a range of 512 bytes to 2 petabytes (PB).
Tip: Specify the capacity as kilobytes (KB), megabytes (MB), gigabytes (GB), or petabytes (PB).
When using MB, specify the value in multiples of 512 bytes. A capacity of 0 is valid
for a striped or sequential volume. The smallest number of supported bytes is 512.
node_id A node ID differs from other IDs in that it is a unique ID assigned when a node is
used to create a system, or when a node is added to a system. A node_id value is
never reused in a system.
Node IDs are internally represented as 64-bit numbers, and like other IDs, cannot be
modified by user commands.
xxx_id All objects are referred to by unique integer IDs, assigned by the system when the
objects are created. All IDs are represented internally as 32-bit integers; node IDs are
an exception.
Data types Value ranges
threads_arg An 8-bit unsigned integer, expressed in decimal format. Valid values are 1, 2, 3, or 4.
velocity_arg The fabric speed in gigabits per second (Gbps). Valid values are 1 or 2.
timezone_arg The ID as detailed in the output of the lstimezones command.
timeout_arg The command timeout period. An integer from 0 to 600 (seconds).
stats_time_arg The frequency at which statistics are gathered. Valid values are 1 to 60 minutes in
increments of 1 minute.
directory_arg Specifies a directory, file name filter, or both, within the specified directory. Valid
directory values are:
• /dumps
• /dumps/audit
• /dumps/cimom
• /dumps/configs
• /dumps/elogs
• /dumps/feature
• /dumps/iostats
• /dumps/iotrace
• /home/admin/upgrade
The file name filter can be any valid file name, containing a maximum of 128
characters, with or without the * (wildcard), and appended to the end of a
directory value. Valid characters are: * (wildcard), . (period; the field must not
start with, end with, or contain two consecutive periods), / (slash), - (dash),
_ (underscore), a-z, A-Z, and 0-9.
locale_arg The system locale setting. Valid values are:
• 0 en_US: US English (default)
• 1 zh_CN: Simplified Chinese
• 2 zh_TW: Traditional Chinese
• 3 ja_JP: Japanese
• 4 fr_FR: French
• 5 de_DE: German
• 6 it_IT: Italian
• 7 es_ES: Spanish
key_arg A user-defined identifier for a secure shell (SSH) key, containing a maximum of 30
characters.
user_arg Specifies the user: admin or service.
copy_rate A numeric value of 0-100.
copy_type Specifies the Mirror copy type: Metro or Global.
The SAN Volume Controller command-line interface offers command line completion for command entry.
Command line completion allows you to type in the first few characters of a command and press the Tab
key to fill in the rest of the command name. If there are multiple commands that start with the same
characters, then a list of possible commands is returned. You can type in more characters until the
command name is unambiguous.
CLI parameters can be entered in any order except in the following situations:
v When a command name is specified, the first argument given must be the action that you want to be
performed.
v Where you are performing an action against a specific object, the object ID or name must be the last
argument in the line.
CLI flags
The following flags are common to all command-line interface (CLI) commands.
-? or -h
Print help text. For example, issuing lscluster -h provides a list of the actions available with the
lscluster command.
-nomsg
When used, this flag prevents the display of the "successfully created" output. For example, if
you issue the command
mkmdiskgrp -ext 16
without this flag, a "successfully created" message is displayed; with -nomsg, only the ID of the
created object is returned.
This parameter can be entered for any command, but is only acted upon by those commands that
generate "successfully created" outputs. All other commands ignore this parameter.
CLI messages
Ensure that you are familiar with the command-line interface (CLI) messages.
When some commands complete successfully, textual output is normally provided. However, some
commands do not provide any output. The phrase No feedback is used to indicate that no output is
provided. If the command does not complete successfully, an error is generated. For example, if the
command has failed as a result of the cluster being unstable, the following output is provided:
• CMMVC5786E The action failed because the cluster is not in a stable state.
The -filtervalue parameter must be specified with attrib=value. The -filtervalue? and -filtervalue
parameters cannot be specified together.
Note: The qualifier characters less-than (<) and greater-than (>) must be enclosed within double
quotation marks (""). For example, -filtervalue vdisk_count "<"4 or port_count ">"1. It is also valid to
include the entire expression within double quotation marks. For example, -filtervalue "vdisk_count<4".
When an attribute requires the -unit parameter, it is specified after the attribute. For example, -filtervalue
capacity=24 -unit mb. The following input options are valid for the -unit parameter:
• b (bytes)
• mb (megabytes)
• gb (gigabytes)
• tb (terabytes)
• pb (petabytes)
Capacity values displayed in units other than bytes might be rounded. When filtering on capacity, use a
unit of bytes, -unit b, for exact filtering.
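For example, to filter for a capacity displayed as 24 MB you can compute the exact byte value and pass it with -unit b. A sketch of the arithmetic, assuming binary units (1 MB = 1 048 576 bytes), which is an assumption about how the system reports capacity:

```shell
# Convert 24 MB to bytes for use with, for example:
#   -filtervalue capacity=25165824 -unit b
mb=24
bytes=$((mb * 1024 * 1024))
echo "$bytes"   # 25165824
```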
You can use the asterisk (*) character as a wildcard character when names are used. The asterisk
character can be used either at the beginning or the end of a text string, but not both. Only one asterisk
character can be used in a -filtervalue parameter.
Table 4. Valid filter attributes
Object Attribute Valid Qualifiers Wildcard Description
Valid
cluster cluster_name or name = Yes The cluster name.
cluster_unique_id or id =, <, <=, >, >= No The cluster ID.
node node_name or name = Yes The node name.
id =, <, <=, >, >= No The node ID.
status = No The status of the node. The following values are valid for node status:
• adding
• deleting
• online
• offline
• pending
IO_group_name = Yes The I/O group name.
IO_group_id =, <, <=, >, >= No The I/O group ID.
hardware = No The following values are valid for
hardware type: 8F2, 8F4, 8G4, CF8,
and 8A4.
io_grp HWS_name or name = Yes The I/O group name.
HWS_unique_id or id =, <, <=, >, >= No The I/O group ID.
node_count =, <, <=, >, >= No The number of nodes in the I/O
group.
host_count =, <, <=, >, >= No The number of hosts associated with
the io_grp.
controller controller_id or id =, <, <=, >, >= No The controller ID.
Table 4. Valid filter attributes (continued)
Object  Attribute  Valid Qualifiers  Wildcard Valid  Description
mdisk name = Yes The name of the MDisk.
id =, <, <=, >, >= No The ID of the MDisk.
controller_name = Yes The name of the controller the
MDisk belongs to.
status = No The status of the MDisk.
Table 4. Valid filter attributes (continued)
Object  Attribute  Valid Qualifiers  Wildcard Valid  Description
vdisk_copy primary = No Indicates that this copy is the
primary copy. The valid values are
yes and no.
status = No The status of the VDisk copy. Valid values are online, degraded, or offline.
sync = No Indicates whether the VDisk copy is
synchronized. Valid values are true
or false.
mdisk_grp_name = Yes The name of the MDisk group.
mdisk_grp_id =, <, <=, >, >= No The ID of the MDisk group.
type = No The type of the VDisk copy. The
valid values are seq, striped, or
image.
easy_tier = No Determines if Easy Tier is permitted to manage the storage pool:
• on
• off
easy_tier_status = No Determines if the automatic data placement function on a storage pool is
activated:
• active
• measured
• inactive
se_vdiskcopy mdisk_grp_id =, <, <=, >, >= No The ID of the MDisk group.
mdisk_grp_name = Yes The name of the MDisk group.
overallocation =, <, <=, >, >= No The percentage of overallocation,
which is displayed as a number.
autoexpand = No Autoexpand flags. The valid values
are on and off.
grainsize =, <, <=, >, >= No Space-efficient grain size.
Table 4. Valid filter attributes (continued)
Object  Attribute  Valid Qualifiers  Wildcard Valid  Description
rcrelationship RC_rel_id or id =, <, <=, >, >= No The Metro Mirror relationship ID.
RC_rel_name or name = Yes The Metro Mirror relationship name.
master_cluster_id =, <, <=, >, >= No The master cluster ID.
master_cluster_name = Yes The master cluster name.
master_vdisk_id =, <, <=, >, >= No The master VDisk ID.
master_vdisk_name = Yes The master VDisk name.
aux_cluster_id =, <, <=, >, >= No The aux cluster ID.
aux_cluster_name = Yes The aux cluster name.
aux_vdisk_id =, <, <=, >, >= No The aux VDisk ID.
aux_vdisk_name = Yes The aux VDisk name.
primary = No The relationship primary. The following values are valid for primary:
• master
• aux
consistency_group_id =, <, <=, >, >= No The Metro Mirror consistency group
ID.
consistency_group_name = Yes The Metro Mirror consistency group
name.
state = Yes The relationship state. The following values are valid for state:
• inconsistent_stopped
• inconsistent_copying
• consistent_stopped
• consistent_synchronized
• idling
• idling_disconnected
• inconsistent_disconnected
• consistent_disconnected
progress =, <, <=, >, >= No The progress of the initial
background copy (synchronization)
for the relationship.
Table 4. Valid filter attributes (continued)
Object  Attribute  Valid Qualifiers  Wildcard Valid  Description
clusterip port_id =, <, <=, >, >= No The port ID. The valid values are 1
or 2.
cluster_name = Yes The cluster name.
cluster_id =, <, <=, >, >= No The cluster ID.
Overview
The SAN Volume Controller clustered system acts as the SSH server in this relationship. The SSH client
provides a secure environment in which to connect to a remote machine. Authentication is performed
using a user name (SVC_username) and password. If you require command-line access without entering a
password, you can use public and private key pairs for authentication.
Generate a Secure Shell (SSH) key pair to use the SAN Volume Controller command-line interface (CLI).
Additionally, when you use the Secure Shell (SSH) to log in to the SAN Volume Controller or Storwize
V7000, you must use the RSA-based private key authentication.
When you are using AIX hosts, SSH logins are authenticated on the system using the RSA-based
authentication that is supported in the OpenSSH client that is available for AIX. This scheme is based on
the supplied password or, if you require command-line access without entering a password, on
public-key cryptography, using the algorithm commonly known as RSA.
Note: The authentication process for host systems that are not AIX is similar.
With this scheme (as in similar OpenSSH systems on other host types), the encryption and decryption is
done using separate keys. This means that it is not possible to derive the decryption key from the
encryption key.
Because physical possession of the private key allows access to the system, the private key must be kept
in a protected place, such as the .ssh directory on the AIX host, with restricted access permissions.
When SSH client (A) attempts to connect to SSH server (B), the SSH password (or, if you require
command-line access without entering a password, the key pair) authenticates the connection. The key
pair consists of two halves: a public key and a private key. The SSH client public key is put onto SSH
server (B) using some means outside of the SSH session. When SSH client (A) tries to connect, the private
key on SSH client (A) is able to authenticate with its public half on SSH server (B).
To connect to the system, the SSH client requires a user login name and an SSH password or, if you
require command-line access without entering a password, the key pair. Authenticate to the system
using a SAN Volume Controller management user name (SVC_username) and password. The SAN
Volume Controller system uses the password (or, if no password is supplied, the SSH key pair) to
authorize the user accessing the system.
You can connect to the system using the same user name with which you log into SAN Volume
Controller.
For Microsoft Windows hosts, PuTTY can be downloaded from the Internet and used at no charge to
provide an SSH client.
Storwize V7000: You can connect to the system using the same user name with which you log into
Storwize V7000.
The IBM System Storage Productivity Center (SSPC) and the workstation for the SAN Volume Controller
include the PuTTY client program, which is a Microsoft Windows SSH client program. The PuTTY client
program can be installed on your SSPC or workstation server in one of these ways:
• If you purchased the SSPC or the workstation hardware option from IBM, the PuTTY client program
has been preinstalled on the hardware.
• You can use the workstation software installation CD to install the PuTTY client program. The SSPC,
workstation hardware option, and the software-only workstation each provide this CD.
• You can use the separate PuTTY client program-installation wizard, putty-version-installer.exe. You can
download the PuTTY client program from this website:
Download Putty (http://www.putty.org/)
Note: Before you install the PuTTY client program, ensure that your Windows system meets the system
requirements. See the IBM System Storage Productivity Center Introduction and Planning Guide for system
requirements.
If you want to use an SSH client other than the PuTTY client, this website offers SSH client alternatives
for Windows:
www.openssh.org/windows.html
Perform the following steps to generate SSH keys using the PuTTY key generator (PuTTYgen):
Procedure
1. Start PuTTYgen by clicking Start > Programs > PuTTY > PuTTYgen. The PuTTY Key Generator
panel is displayed.
2. Click SSH-2 RSA as the type of key to generate.
a. Click Save public key. You are prompted for the name and location of the public key.
b. Type icat.pub as the name of the public key and specify the location where you want to save the
public key. For example, you can create a directory on your computer called keys to store both the
public and private keys.
c. Click Save.
6. Save the private key by performing the following steps:
a. Click Save private key. The PuTTYgen Warning panel is displayed.
b. Click Yes to save the private key without a passphrase.
c. Type icat as the name of the private key, and specify the location where you want to save the
private key. For example, you can create a directory on your computer called keys to store both the
public and private keys. It is recommended that you save your public and private keys in the
same location.
d. Click Save.
7. Close the PuTTY Key Generator window.
Attention: Do not run scripts that create child processes that run in the background and call SAN
Volume Controller commands. This can cause the system to lose access to data and cause data to be lost.
Perform the following steps to configure a PuTTY session for the CLI:
Procedure
1. Select Start > Programs > PuTTY > PuTTY. The PuTTY Configuration window opens.
2. Click Session in the Category navigation tree. The Basic options for your PuTTY session are
displayed.
3. Click SSH as the Protocol option.
4. Click Only on clean exit as the Close window on exit option. This ensures that connection errors are
displayed.
5. Click Connection > SSH in the Category navigation tree. The options controlling SSH connections
are displayed.
6. Click 2 as the Preferred SSH protocol version.
7. Click Connection > SSH > Auth in the Category navigation tree. The options controlling SSH
authentication are displayed.
8. If you are using key authentication instead of a password, click Browse or type the fully qualified
file name and location of the SSH private key in the Private key file for authentication field.
9. Click Connection > Data in the Category navigation tree.
10. Type the user name that you want to use on the SAN Volume Controller in the Auto-login
username field.
11. Click Session in the Category navigation tree. The Basic options for your PuTTY session are
displayed.
12. In the Host Name (or IP Address) field, type one of the SAN Volume Controller clustered system
(system) IP addresses or host names.
13. Type 22 in the Port field. The SAN Volume Controller system uses the standard SSH port.
Results
Note: If you configured more than one IP address for the SAN Volume Controller system, repeat the
previous steps to create another saved session for the second IP address. This can then be used if the first
IP address is unavailable.
Note: Windows users can download PuTTY from the following website: Download Putty.
The Secure Shell (SSH) protocol specifies that the first access to a new host server sends a challenge to
the SSH user to accept the SSH server public key or user password. Because this is the first time that you
connect to an SSH server, the server is not included in the SSH client list of known hosts. Therefore, there
is a fingerprint challenge, which asks if you accept the responsibility of connecting with this host. If you
type y, the host fingerprint and IP address are saved by the SSH client.
When you use PuTTY, you must also type y to accept this host fingerprint. However, the host fingerprint
and IP address are stored in the registry for the user name that is logged onto Windows.
The SSH protocol also specifies that once the SSH server public key is accepted, another challenge is
presented if the fingerprint of an SSH server changes from the one previously accepted. In this case, you
must decide if you want to accept this changed host fingerprint.
Note: The SSH server keys on the SAN Volume Controller are regenerated when a microcode load is
performed on the clustered system. As a result, a challenge is sent because the fingerprint of the SSH
server has changed.
All command-line interface (CLI) commands are run in an SSH session. You can run the commands in
one of the following modes:
v An interactive prompt mode
v A single line command mode, which is entered one time to include all parameters
Interactive mode
For interactive mode, you can use the PuTTY executable to open the SSH restricted shell.
The following is an example of the command that you can issue to start interactive mode:
C:\support utils\putty <username>@svcconsoleip
where support utils\putty is the location of your putty.exe file, svcconsoleip is the IP address of your
management GUI, and <username> is the user name that you want to use on SAN Volume Controller.
If you were to issue the lsuser command, which lists the SSH client public keys that are stored on the
SAN Volume Controller clustered system, the following output is displayed when ssh_key=yes:
IBM_2145:cluster0:superuser>lsuser
id name password ssh_key remote usergrp_id usergrp_name
0 superuser yes yes no 0 SecurityAdmin
1 smith no yes no 4 Monitor
2 jones no yes no 2 CopyOperator
You can type exit and press Enter to escape the interactive mode command.
The following is an example of the host fingerprint challenge when using plink in interactive mode:
C:\Program Files\IBM\svcconsole\cimom>plink [email protected]
The server's host key is not cached in the registry. You
have no guarantee that the server is the computer you
think it is.
The server's key fingerprint is:
ssh-rsa 1024 e4:c9:51:50:61:63:e9:cd:73:2a:60:6b:f0:be:25:bf
If you trust this host, enter "y" to add the key to
PuTTY's cache and carry on connecting.
If you want to carry on connecting just once, without
adding the key to the cache, enter "n".
If you do not trust this host, press Return to abandon the
connection.
Store key in cache? (y/n) y
Using user name "superuser".
Authenticating with public key "imported-openssh-key"
IBM_2145:your_cluster_name:superuser>
For single line command mode, you can type the following all on one command line:
C:\Program Files\IBM\svcconsole\cimom>
plink [email protected] lsuser
Authenticating with public key "imported-openssh-key"
id name password ssh_key remote usergrp_id usergrp_name
0 superuser yes yes no 0 SecurityAdmin
1 smith no yes no 4 Monitor
2 jones no yes no 2 CopyOperator
Note: If you are submitting a CLI command with all parameters in single line command mode, you are
challenged upon first appearance of the SSH server host fingerprint. Ensure that the SSH server host
fingerprint is accepted before you submit a batch script file.
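If you script against the CLI with an OpenSSH client instead of plink, the equivalent first-connection prompt can be handled in the client configuration. A sketch of a ~/.ssh/config fragment (the host alias and address are illustrative placeholders, and the accept-new setting is an assumption that a reasonably recent OpenSSH client is in use):

```
Host svc
    HostName cluster_ip_address
    User SVC_username
    StrictHostKeyChecking accept-new
```

With this fragment, the first connection stores the host key automatically, while a later change to the host key is still rejected.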
The following is an example of the host fingerprint challenge when using plink in single line command
mode:
This task assumes that you have already configured and saved a PuTTY session using the Secure Shell
(SSH) password. If you require command-line access without entering a password, use the SSH key pair
that you created for the CLI; see "Generating an SSH key pair using PuTTY" on page 2.
Procedure
1. Select Start > Programs > PuTTY > PuTTY. The PuTTY Configuration window opens.
2. Select the name of your saved PuTTY session and click Load.
3. Click Open.
Note: If this is the first time that the PuTTY application is being used since you generated and
uploaded the SSH password or key pair, a PuTTY Security Alert window is displayed. Click Yes to
accept the change and trust the new key.
4. Type the SVC_username in the login as: field and press Enter.
oss.software.ibm.com/developerworks/projects/openssh
Linux operating systems
The OpenSSH client is installed by default on most Linux distributions. If it is not installed on
your system, consult your Linux installation documentation or visit the following website:
www.openssh.org/portable.html
The OpenSSH client can run on a variety of additional operating systems. For more information
about the OpenSSH client, visit the following website:
www.openssh.org/portable.html
Authentication to the system generally requires the use of a password, but if there is no password you
can use a key pair. Perform the following steps to set up an RSA key pair on the AIX or Linux host and
the SAN Volume Controller or Storwize V7000 cluster:
Results
Where my_system is the name of the system IP, SVC_username is the user name that you also log into the
system with, and full_path_to_key is the full path to the key file that was generated in the previous step.
Authenticate to the system using an SVC_username and password. (If you require command-line access
without using a password, SSH keys can be used.) The SAN Volume Controller system determines which
user is logging in from the key that the user is using.
Note: You can omit -i full_path_to_key if you configure the SSH client to use the key file
automatically. For more information, refer to the OpenSSH documentation.
If you use the Secure Shell (SSH) to log in to the SAN Volume Controller or Storwize V7000, use the
password defined for accessing the GUI. You can also use RSA-based private key authentication.
For more information, see Connecting to the CLI using OpenSSH on page 8.
Perform these steps to set up an RSA key pair on the AIX or Linux host and the SAN Volume Controller
or Storwize V7000 clustered system:
Procedure
1. Create an RSA key pair by issuing a command on the host that is similar to this command:
ssh-keygen -t rsa
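A fuller, non-interactive sketch of the same step (the output path and empty passphrase are illustrative choices, not requirements; for real use, store the key in a protected location such as ~/.ssh):

```shell
# Generate a 2048-bit RSA key pair with no passphrase into a
# temporary directory (illustrative only).
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$keydir/svc_key"

# Two files result: svc_key (private) and svc_key.pub (public).
ls "$keydir"
```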
2. Connect to the system by issuing a command on the host that is similar to this command:
ssh -i full_path_to_key SVC_username@my_system
where my_system is the name of the system IP, full_path_to_key is the full path to the key
file that was generated in the previous step, and SVC_username is the user name that you want to use on
SAN Volume Controller.
Note: You can omit -i full_path_to_key if you configure the SSH client to use the key file
automatically. For more information, refer to the OpenSSH documentation.
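One way to configure the client to use the key file automatically, as the note suggests, is a ~/.ssh/config entry. A sketch using the placeholder names from above (an illustration, not the only method):

```
Host my_system
    User SVC_username
    IdentityFile full_path_to_key
```

With this entry in place, ssh my_system connects without the -i parameter.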
You can create two categories of users that access the system. These types are based on how the users are
authenticated to the system. Local users must provide an SVC_username and password or, if you
require command-line access without entering a password, a Secure Shell (SSH) key (or both). Local users
are authenticated through the authentication methods that are located on the SAN Volume Controller
system.
If the local user needs access to management GUI, a password is needed for the user. Access to the
command-line interface (CLI) is also possible with the same password or (alternatively) a valid SSH key
can be used. An SSH password is required if a user is working with both interfaces. Local users must be
part of a user group that is defined on the system. User groups define roles that authorize the users
within that group to a specific set of operations on the system.
A remote user is authenticated on a remote service, usually provided by a SAN management application
such as IBM Tivoli Storage Productivity Center, and does not need local authentication methods. For a
remote user, a password is required; to use the command-line interface without entering a password, an
SSH key is also required.
Remote users only need local credentials to access the management GUI if the remote service is down.
Remote users have their groups defined by the remote authentication service.
Procedure
1. Select User Management > Users.
2. Click New User.
3. Enter the information on the new user and click Create.
Chapter 2. Copying the SAN Volume Controller software
upgrade files using PuTTY scp
PuTTY scp (pscp) provides a file transfer application for secure shell (SSH) to copy files either between
two directories on the configuration node or between the configuration node and another host.
To use the pscp application, you must have the appropriate permissions on the source and destination
directories on your respective hosts.
The pscp application is available when you install an SSH client on your host system. You can access the
pscp application through a Microsoft Windows command prompt.
Procedure
1. Start a PuTTY session.
2. Configure your PuTTY session to access your SAN Volume Controller clustered system (system).
3. Save your PuTTY configuration session. For example, you can name your saved session SVCPUTTY.
4. Open a command prompt.
5. Issue this command to set the path environment variable to include the PuTTY directory:
set path=C:\Program Files\putty;%path%
where Program Files is the directory where PuTTY is installed.
6. Issue this command to copy the package onto the node where the CLI runs:
pscp -load saved_putty_configuration
directory_software_upgrade_files/software_upgrade_file_name
username@cluster_ip_address:/home/admin/upgrade
where saved_putty_configuration is the name of the PuTTY configuration session,
directory_software_upgrade_files is the location of the software upgrade files, software_upgrade_file_name
is the name of the software upgrade file, username is the name that you want to use on the SAN
Volume Controller, and cluster_ip_address is an IP address of your clustered system.
If there is insufficient space to store the software upgrade file on the system, the copy process fails.
Perform these steps:
a. Use pscp to copy data that you want to preserve from the /dumps directory. For example, issue
this command to copy all event logs from the system to the IBM System Storage Productivity
Center:
pscp -unsafe -load saved_putty_configuration
username@cluster_ip_address:/dumps/elogs/*
your_preferred_directory
where saved_putty_configuration is the name of the PuTTY configuration session, username is the
name that you want to use on the SAN Volume Controller, cluster_ip_address is the IP address of
your system, and your_preferred_directory is the directory where you want to transfer the event
logs.
b. Issue the cleardumps command to free space on the system:
cleardumps -prefix /dumps
c. Then repeat step 6.
Overview
The CLI commands use the Secure Shell (SSH) connection between the SSH client software on the host
system and the SSH server on the SAN Volume Controller clustered system.
Before you can use the CLI, you must have already created a clustered system.
You must perform these actions to use the CLI from a client system:
• Install and set up SSH client software on each system that you plan to use to access the CLI.
• Authenticate to the system using a password.
• If you require command line access without entering a password, use an SSH public key. Then store
the SSH public key for each SSH client on the SAN Volume Controller.
Note: After the first SSH public key is stored, you can add additional SSH public keys using either the
management GUI or the CLI.
Procedure
1. Issue the showtimezone CLI command to display the current time-zone settings for the clustered
system. The time zone and the associated time-zone ID are displayed.
This task assumes that you have already launched the management GUI.
You can set the system date and time manually or by specifying an NTP server:
Procedure
1. Click Manage Systems > Set System Time in the portfolio. The System Date and Time Settings panel
is displayed.
2. To use NTP to manage the clustered system date and time, enter an Internet Protocol Version 4 (IPv4)
address and click Set NTP Server.
Note: If you are using a remote authentication service to authenticate users to the SAN Volume
Controller clustered system, then both the system and the remote service should use the same NTP
server. Consistent time settings between the two systems ensure interactive performance of the
management GUI and correct assignments for user roles.
3. To set the clustered system date and time manually, continue with the following steps.
4. Type your changes into the Date, Month, Year, Hours, and Minutes fields and select a new time zone
from the Time Zone list.
5. Select Update cluster time and date, Update cluster time zone, or both.
6. Click Update to submit the update request to the clustered system.
SAN Volume Controller provides two license options: Physical Disk Licensing and Capacity Licensing.
Perform the following steps to view and update your SAN Volume Controller license settings:
12 SAN Volume Controller and Storwize V7000: Command-Line Interface User's Guide
Procedure
1. Issue the lslicense CLI command to view the current license settings for the clustered system.
2. Issue the chlicense CLI command to change the licensed settings of the clustered system.
Attention:
v License settings are entered when the clustered system is first created; do not update the settings
unless you have changed your license.
v To select Physical Disk Licensing, run the chlicense command with one or more of the
-physical_disks, -physical_flash, and -physical_remote parameters.
v To select Capacity Licensing, run the chlicense command with one or more of the -flash, -remote,
and -virtualization parameters. If the physical disks value is nonzero, these parameters cannot be
set.
Procedure
Issue the lssystem command to display the properties for a clustered system.
The following is an example of the command you can issue:
lssystem -delim : build1
The clustered system (system) superuser password can be reset using the front panel of the configuration
node. To meet varying security requirements, this functionality can be enabled or disabled using the CLI.
Complete the following steps to view and change the status of the password reset feature:
1. Issue the setpwdreset CLI command to view and change the status of the password reset feature for
the SAN Volume Controller front panel.
2. Record the system superuser password because you cannot access the system without it.
Storwize V7000: The system superuser password can be reset using a USB key. To meet varying security
requirements, this functionality can be enabled or disabled using the CLI. Complete the following steps to
view and change the status of the password reset feature:
1. Issue the setpwdreset CLI command to view and change the status of the password reset feature for
the Storwize V7000.
2. Record the system superuser password because you cannot access the system without it.
Before you add a node to a clustered system, you must make sure that the switch zoning is configured
such that the node being added is in the same zone as all other nodes in the clustered system. If you are
replacing a node and the switch is zoned by worldwide port name (WWPN) rather than by switch port,
make sure that the switch is configured such that the node being added is in the same VSAN/zone.
Attention:
1. If you are re-adding a node to the SAN, ensure that you are adding the node to the same I/O group
from which it was removed. Failure to do this can result in data corruption. You must use the
information that was recorded when the node was originally added to the clustered system. If you do
not have access to this information, call the IBM Support Center to add the node back into the
clustered system without corrupting the data.
2. The LUNs that are presented to the ports on the new node must be the same as the LUNs that are
presented to the nodes that currently exist in the clustered system. You must ensure that the LUNs are
the same before you add the new node to the clustered system.
3. LUN masking for each LUN must be identical on all nodes in a clustered system. You must ensure
that the LUN masking for each LUN is identical before you add the new node to the clustered
system.
4. You must ensure that the model type of the new node is supported by the SAN Volume Controller
software level that is currently installed on the clustered system. If the model type is not supported
by the SAN Volume Controller software level, upgrade the clustered system to a software level that
supports the model type of the new node. See the following website for the latest supported software
levels:
www.ibm.com/storage/support/2145
Applications on the host systems direct I/O operations to file systems or logical volumes that are
mapped by the operating system to virtual paths (vpaths), which are pseudo disk objects supported by
the Subsystem Device Driver (SDD). SDD maintains an association between a vpath and a SAN Volume
Controller volume. This association uses an identifier (UID) which is unique to the volume and is never
reused. The UID permits SDD to directly associate vpaths with volumes.
SDD operates within a protocol stack that contains disk and Fibre Channel device drivers that are used to
communicate with the SAN Volume Controller using the SCSI protocol over Fibre Channel as defined by
If an error occurs, the error recovery procedures (ERPs) operate at various tiers in the protocol stack.
Some of these ERPs cause I/O to be redriven using the same WWNN and LUN numbers that were
previously used.
SDD does not check the association of the volume with the vpath on every I/O operation that it
performs.
Before you add a node to the clustered system, you must check to see if any of the following conditions
are true:
v The clustered system has more than one I/O group.
v The node being added to the clustered system uses physical node hardware or a slot which has
previously been used for a node in the clustered system.
v The node being added to the clustered system uses physical node hardware or a slot which has
previously been used for a node in another clustered system and both clustered systems have visibility
to the same hosts and back-end storage.
If any of the previous conditions are true, the following special procedures apply:
v The node must be added to the same I/O group that it was previously in. You can use the
command-line interface (CLI) command lsnode or the management GUI to determine the WWN of the
clustered system nodes.
v Before you add the node back into the clustered system, you must shut down all of the hosts using the
clustered system. The node must then be added before the hosts are rebooted. If the I/O group
information is unavailable or it is inconvenient to shut down and reboot all of the hosts using the
clustered system, then do the following:
a. On all of the hosts connected to the clustered system, unconfigure the Fibre Channel adapter device
driver, the disk device driver, and multipathing driver before you add the node to the clustered
system.
b. Add the node to the clustered system, and then reconfigure the Fibre Channel adapter device driver,
the disk device driver, and multipathing driver.
The following two scenarios describe situations where the special procedures can apply:
v Four nodes of an eight-node clustered system have been lost because of the failure of a pair of 2145
UPS or four 2145 UPS-1U. In this case, the four nodes must be added back into the clustered system
using the CLI command addnode or the management GUI.
Note: You do not need to run the addnode command on a node with a partner that is already in a
clustered system; the clustered system automatically detects an online candidate.
Note: The addnode command is a SAN Volume Controller command. For Storwize V7000, use the
addcontrolenclosure command.
v A user decides to delete four nodes from the clustered system and add them back into the clustered
system using the CLI command addnode or the management GUI.
Note: The addnode command is a SAN Volume Controller command. For Storwize V7000, use the
addcontrolenclosure command.
For 5.1.0 nodes, the SAN Volume Controller automatically re-adds nodes that have failed back to the
clustered system. If the clustered system reports an error for a node missing (error code 1195) and that
node has been repaired and restarted, the clustered system automatically re-adds the node back into the
clustered system. This process can take up to 20 minutes, so you can manually re-add the node by
completing the following steps:
Procedure
1. Issue the lsnode CLI command to list the nodes that are currently part of the clustered system and
determine the I/O group for which to add the node.
The following is an example of the output that is displayed:
lsnode -delim :
id:name:UPS_serial_number:WWNN:status:IO_group_id:IO_group_name
:config_node:UPS_unique_id:hardware:iscsi_name:iscsi_alias
:panel_name:enclosure_id:canister_id:enclosure_serial_number
1:node1::50050868010050B2:online:0:io_grp0:yes::100:iqn.1986-03.com.ibm
:2145.cluster0.node1::02-1:2:1:123ABCG
2:node2::50050869010050B2:online:0:io_grp0:no::100:iqn.1986-03.com.ibm
:2145.cluster0.node2::02-2:2:2:123ABDG
2. Issue the lsnodecandidate CLI command to list nodes that are not assigned to a clustered system and
to verify that a second node is added to an I/O group.
Note: The lsnodecandidate command is a SAN Volume Controller command. For Storwize V7000,
use the lscontrolenclosurecandidate command.
The following is an example of the output that is displayed:
lsnodecandidate -delim :
id:panel_name:UPS_serial_number:UPS_unique_id:hardware
5005076801000001:000341:10L3ASH:202378101C0D18D8:8A4
5005076801000009:000237:10L3ANF:202378101C0D1796:8A4
50050768010000F4:001245:10L3ANF:202378101C0D1796:8A4
....
3. Issue the addnode CLI command to add a node to the clustered system.
Note: The addnode command is a SAN Volume Controller command. For Storwize V7000, use the
addcontrolenclosure command.
Important: Each node in an I/O group must be attached to a different uninterruptible power supply.
The following is an example of the CLI command you can issue to add a node to the clustered system
using the panel name parameter:
addnode -panelname 000237
-iogrp io_grp0
Where 000237 is the panel name of the node and io_grp0 is the name of the I/O group that you are
adding the node to.
The following is an example of the CLI command you can issue to add a node to the clustered system
using the WWNN parameter:
addnode -wwnodename 5005076801000001
-iogrp io_grp1
Where 5005076801000001 is the WWNN of the node and io_grp1 is the name of the I/O group that you
are adding the node to.
4. Issue the lsnode CLI command to verify that the node was added. The following is an example of the
output that is displayed:
id:name:UPS_serial_number:WWNN:status:IO_group_id:IO_group_name:config_node:UPS_unique_id:
hardware:iscsi_name:iscsi_alias
1:node1:10L3ASH:0000000000000000:offline:0:io_grp0:no:1000000000003206:
8A4:iqn.1986-03.com.ibm:2145.ndihill.node1:
Note: If this command is issued quickly after you have added nodes to the clustered system, the
status of the nodes might be adding. The status is shown as adding if the process of adding the nodes
to the clustered system is still in progress. You do not have to wait for the status of all the nodes to be
online before you continue with the configuration process.
Results
Procedure
1. Issue the lsnode CLI command to display a concise list of nodes in the system.
The following is an example of the CLI command you can issue to list the nodes in the system:
lsnode -delim :
The following is an example of the output that is displayed:
id:name:UPS_serial_number:WWNN:status:IO_group_id:IO_group_name:config_node:UPS_unique_id:hardware:iscsi_name:iscsi_alias:
panel_name:enclosure_id:canister_id:enclosure_serial_number
1:node1:UPS_Fake_SN:50050768010050B1:online:0:io_grp0:yes:10000000000050B1:8G4:iqn.1986-03.com.ibm:2145.cluster0.node1:000368:::
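Colon-delimited output such as the example above is convenient to post-process on the host. The sketch below is an illustration only: it parses concise -delim : output into one dictionary per node, and the shortened header used in the usage example is an assumption, not the full lsnode column set.

```python
# Sketch: parse colon-delimited concise CLI output (for example,
# "lsnode -delim :") into dictionaries keyed by the header columns.
def parse_concise(output):
    lines = [line for line in output.strip().splitlines() if line]
    header = lines[0].split(":")
    return [dict(zip(header, line.split(":"))) for line in lines[1:]]
```

For instance, parsing a captured lsnode listing lets a script check that every node reports a status of online before continuing with configuration.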
2. Issue the lsnode CLI command and specify the node ID or name of the node that you want to receive
detailed output.
The following is an example of the CLI command you can issue to list detailed output for a node in
the system:
lsnode -delim : group1node1
Where group1node1 is the name of the node for which you want to view detailed output.
The following is an example of the output that is displayed:
id:1
name:group1node1
UPS_serial_number:10L3ASH
WWNN:500507680100002C
status:online
IO_group_id:0
IO_group_name:io_grp0
partner_node_id:2
partner_node_name:group1node2
config_node:yes
UPS_unique_id:202378101C0D18D8
port_id:500507680110002C
port_status:active
port_speed:2GB
port_id:500507680120002C
port_status:active
port_speed:2GB
port_id:500507680130002C
port_status:active
port_speed:2GB
port_id:500507680140003C
port_status:active
port_speed:2GB
hardware:8A4
iscsi_name:iqn.1986-03.com.ibm:2145.ndihill.node2
iscsi_alias
failover_active:no
failover_name:node1
failover_iscsi_name:iqn.1986-03.com.ibm:2145.ndihill.node1
failover_iscsi_alias
When back-end controllers are added to the Fibre Channel SAN and included in the same switch zone as
a SAN Volume Controller system, the clustered system (system) automatically discovers the back-end
controller and integrates it to determine the storage that is presented to the SAN Volume Controller
nodes.
The Small Computer System Interface (SCSI) logical units (LUs) that are presented by the back-end
controller are displayed as unmanaged MDisks. However, if the configuration of the back-end controller
is modified after this has occurred, the SAN Volume Controller system might be unaware of these
configuration changes. You can request that the SAN Volume Controller system rescans the Fibre Channel
SAN to update the list of unmanaged MDisks.
Note: The automatic discovery that is performed by the SAN Volume Controller system does not write
anything to an unmanaged MDisk. You must instruct the SAN Volume Controller system to add an
MDisk to a storage pool or use an MDisk to create an image mode volume.
Procedure
1. Issue the detectmdisk CLI command to manually scan the Fibre Channel network. The scan discovers
any new MDisks that might have been added to the system and rebalances MDisk access across the
available controller device ports.
Results
The back-end controllers and switches are now set up correctly, and the SAN Volume Controller system
recognizes the storage that is presented by the back-end controller.
Example
This example describes a scenario where a single back-end controller is presenting eight SCSI LUs to the
SAN Volume Controller system:
1. Issue detectmdisk.
2. Issue lsmdiskcandidate.
This output is displayed:
id
0
1
2
3
4
5
6
7
Attention: If you add an MDisk to a storage pool as a managed MDisk, any data on the MDisk is lost. If
you want to keep the data on an MDisk (for example, because you want to import storage that was
previously not managed by SAN Volume Controller), you must create image mode volumes instead.
Assume that the clustered system has been set up and that a back-end controller has been configured to
present new storage to the SAN Volume Controller.
If you are using a SAN Volume Controller solid-state drive (SSD) managed disk, ensure that you are
familiar with the SSD configuration rules.
If you intend to keep the volume allocation within one storage system, ensure that all MDisks in the
storage pool are presented by the same storage system.
Ensure that all MDisks that are allocated to a single storage pool are of the same RAID type. If the
storage pool has more than one tier of storage, ensure that all MDisks in the same tier are of the same
RAID type. When using Easy Tier, all of the MDisks in a storage pool in the same tier should be similar
and have similar performance characteristics. If you do not use Easy Tier, the storage pool should contain
only one tier of storage, and all of the MDisks in the storage pool should be similar and have similar
performance characteristics.
Consider the following factors as you decide how many storage pools to create:
v A volume can only be created using the storage from one storage pool. Therefore, if you create small
storage pools, you might lose the benefits that are provided by virtualization, namely more efficient
management of free space and a more evenly distributed workload for better performance.
v If any MDisk in a storage pool goes offline, all the volumes in the storage pool go offline. Therefore,
you might want to consider using different storage pools for different back-end controllers or for
different applications.
v If you anticipate regularly adding and removing back-end controllers or storage, this task is made
simpler by grouping all the MDisks that are presented by a back-end controller into one storage pool.
v All the MDisks in a storage pool should have similar levels of performance or reliability, or both. If a
storage pool contains MDisks with different levels of performance, the performance of the volumes in
this group is limited by the performance of the slowest MDisk. If a storage pool contains MDisks with
different levels of reliability, the reliability of the volumes in this group is that of the least reliable
MDisk in the group.
Note: When you create a storage pool with a new solid-state drive (SSD), the new SSD is automatically
formatted and set to a block size of 512 bytes.
Even with the best planning, circumstances can change and you must reconfigure your storage pools
after they have been created. The data migration facilities that are provided by the SAN Volume
Controller enable you to move data without disrupting I/O.
Consider the following factors as you decide the extent size of each new storage pool:
v You must specify the extent size when you create a new storage pool.
v You cannot change the extent size later; it must remain constant throughout the lifetime of the storage
pool.
v Storage pools can have different extent sizes; however, this places restrictions on the use of data
migration.
v The choice of extent size affects the maximum size of a volume in the storage pool.
Table 5 on page 22 compares the maximum volume capacity for each extent size. The maximum is
different for thin-provisioned volumes. Because the SAN Volume Controller allocates a whole number of
extents to each volume that is created, using a larger extent size might increase the amount of storage
that is wasted at the end of each volume.
Important: You can specify different extent sizes for different storage pools; however, you cannot
migrate volumes between storage pools with different extent sizes. If possible, create all your storage
pools with the same extent size.
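The relationship between extent size, maximum volume capacity, and wasted space can be sketched numerically. In this illustration, the per-volume limit of 131,072 extents is an assumption chosen so that a 16 MB extent size yields a 2 TB maximum volume; consult Table 5 for the authoritative values.

```python
MAX_EXTENTS_PER_VOLUME = 131072  # assumption for illustration; see Table 5

def max_volume_capacity_gb(extent_size_mb):
    """Maximum volume capacity for a given extent size, in GB."""
    return extent_size_mb * MAX_EXTENTS_PER_VOLUME // 1024

def allocated_capacity_mb(volume_size_mb, extent_size_mb):
    """Capacity actually consumed by a volume: a whole number of extents is
    allocated, so larger extents can waste more space at the end of a volume."""
    extents = -(-volume_size_mb // extent_size_mb)  # ceiling division
    return extents * extent_size_mb
```

For example, a 100 MB volume consumes 112 MB of pool capacity with 16 MB extents but 128 MB with 32 MB extents.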
Procedure
1. Issue the mkmdiskgrp CLI command to create a storage pool. The following is an example of the CLI
command you can issue:
mkmdiskgrp -name maindiskgroup -ext 32
-mdisk mdsk0:mdsk1:mdsk2:mdsk3
where maindiskgroup is the name of the storage pool that you want to create, 32 MB is the size of the
extent you want to use, and mdsk0, mdsk1, mdsk2, mdsk3 are the names of the four MDisks that you want
to add to the group.
Results
Example
The following example provides a scenario where you want to create a storage pool, but you do not have
any MDisks available to add to the group. You plan to add the MDisks at a later time. You use the
mkmdiskgrp CLI command to create the storage pool bkpmdiskgroup and later used the addmdisk CLI
command to add mdsk4, mdsk5, mdsk6, mdsk7 to the storage pool.
1. Issue mkmdiskgrp -name bkpmdiskgroup -ext 32
where bkpmdiskgroup is the name of the storage pool that you want to create and 32 MB is the size of
the extent that you want to use.
2. You find four MDisks that you want to add to the storage pool.
3. Issue addmdisk -mdisk mdsk4:mdsk5:mdsk6:mdsk7 bkpdiskgroup
where mdsk4, mdsk5, mdsk6, mdsk7 are the names of the MDisks that you want to add to the storage
pool and bkpdiskgroup is the name of the storage pool for which you want to add MDisks.
The MDisks must be in unmanaged mode. Disks that already belong to a storage pool cannot be added
to another storage pool until they have been deleted from their current storage pool. An MDisk can be
deleted from a storage pool under these circumstances:
v If the MDisk does not contain any extents in use by a volume
v If you can first migrate the extents in use onto other free extents within the group
Important: Do not add an MDisk using this procedure if you are mapping the MDisk to an image mode
volume. Adding an MDisk to a storage pool enables the SAN Volume Controller to write new data to the
MDisk; therefore, any existing data on the MDisk is lost. If you want to create an image mode volume,
use the mkvdisk command instead of addmdisk.
If you are using a SAN Volume Controller solid-state drive (SSD) managed disk, ensure that you are
familiar with the SSD configuration rules.
The SAN Volume Controller performs tests on the MDisks in the list before the MDisks are allowed to
become part of a storage pool when:
v Adding MDisks to a storage pool using the addmdisk command
v Creating a storage pool using the mkmdiskgrp -mdisk command
These tests include checks of the MDisk identity, capacity, status and the ability to perform both read and
write operations. If these tests fail or exceed the time allowed, the MDisks are not added to the group.
However, with the mkmdiskgrp -mdisk command, the storage pool is still created even if the tests fail, but
it does not contain any MDisks. If tests fail, confirm that the MDisks are in the correct state and that they
have been correctly discovered.
Note: The first time that you add a new solid-state drive (SSD) to a storage pool, the SSD is
automatically formatted and set to a block size of 512 bytes.
2. Issue the addmdisk CLI command to add MDisks to the storage pool.
This is an example of the CLI command you can issue to add MDisks to a storage pool:
addmdisk -mdisk mdisk4:mdisk5:mdisk6:mdisk7 bkpmdiskgroup
Where mdisk4:mdisk5:mdisk6:mdisk7 are the names of the MDisks that you want to add to the storage
pool and bkpmdiskgroup is the name of the storage pool for which you want to add the MDisks.
Note: Quorum functionality is not supported for internal drives on SAN Volume Controller nodes.
To set an MDisk as a quorum disk, use the chquorum command. Storwize V7000: To set an external
MDisk as a quorum disk, use the chquorum command.
When setting an MDisk as a quorum disk, keep the following recommendations in mind:
v When possible, distribute the quorum candidate disks so that each MDisk is provided by a different
storage system. For a list of storage systems that support quorum disks, search for supported hardware
list at the following website:
www.ibm.com/storage/support/2145
v Before you set the quorum disk with the chquorum command, use the lsquorum command to ensure
that the MDisk you want is online.
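The distribution recommendation above can be expressed as a simple selection rule. The sketch below is an illustration only, not the system's actual quorum-selection algorithm, and the tuple layout for describing MDisks is an assumption.

```python
def pick_quorum_candidates(mdisks, count=3):
    """Illustrative sketch: choose up to `count` online MDisks so that each
    comes from a different storage system (controller).
    mdisks: iterable of (mdisk_name, controller_name, status) tuples."""
    chosen, seen_controllers = [], set()
    for name, controller, status in mdisks:
        if status == "online" and controller not in seen_controllers:
            chosen.append(name)
            seen_controllers.add(controller)
            if len(chosen) == count:
                break
    return chosen
```

Offline MDisks are skipped, which mirrors the advice to confirm with lsquorum that a candidate is online before running chquorum.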
Storwize V7000: Quorum disk configuration describes how quorum disks are used by the system, and how
they are selected. The system automatically assigns quorum disks. Do not override the quorum disk
assignment if you have a Storwize V7000 without external MDisks. For a Storwize V7000 with more than
one control enclosure and with external MDisks, distribute the quorum candidate disks (when possible)
so that each MDisk is provided by a different storage system. For a list of storage systems that support
quorum disks, search for supported hardware list at the following website:
www.ibm.com/storage/support/2145
About this task
Table 6 provides an example of the amount of memory that is required for Volume Mirroring and each
Copy Service feature.
Table 6. Memory required for Volume Mirroring and Copy Services
Feature | Grain size | 1 MB of memory provides the following volume capacity for the specified I/O group
Metro Mirror or Global Mirror | 256 KB | 2 TB of total Metro Mirror and Global Mirror volume capacity
FlashCopy | 256 KB | 2 TB of total FlashCopy source volume capacity
FlashCopy | 64 KB | 512 GB of total FlashCopy source volume capacity
Incremental FlashCopy | 256 KB | 1 TB of total incremental FlashCopy source volume capacity
Incremental FlashCopy | 64 KB | 256 GB of total incremental FlashCopy source volume capacity
Volume Mirroring | 256 KB | 2 TB of mirrored volume capacity
Notes:
1. For multiple FlashCopy targets, you must consider the number of mappings. For example, for a mapping with a
grain size of 256 KB, 8 KB of memory allows one mapping between a 16 GB source volume and a 16 GB target
volume. Alternatively, for a mapping with a 256 KB grain size, 8 KB of memory allows two mappings between
one 8 GB source volume and two 8 GB target volumes.
2. When creating a FlashCopy mapping, if you specify an I/O group other than the I/O group of the source
volume, the memory accounting goes towards the specified I/O group, not towards the I/O group of the source
volume.
3. For Volume Mirroring, the full 512 MB of memory space provides 1 PB of total mirroring capacity.
4. In this table, capacity refers to the virtual capacity of the volume. For thin-provisioned volumes with different
virtual capacities and real capacities, the virtual capacity is used for memory accounting.
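The accounting in Table 6 and note 1 can be restated as a small calculation. The sketch below is illustrative only; the feature keys are descriptive names invented here, not CLI parameters.

```python
# Volume capacity (in GB) that 1 MB of bitmap memory covers, per Table 6.
CAPACITY_GB_PER_MB = {
    ("metro_or_global_mirror", 256): 2048,
    ("flashcopy", 256): 2048,
    ("flashcopy", 64): 512,
    ("incremental_flashcopy", 256): 1024,
    ("incremental_flashcopy", 64): 256,
    ("volume_mirroring", 256): 2048,
}

def bitmap_memory_kb(feature, grain_kb, capacity_gb):
    """Approximate bitmap memory (KB) needed for the given volume capacity."""
    return capacity_gb * 1024 / CAPACITY_GB_PER_MB[(feature, grain_kb)]
```

This reproduces note 1: mapping a 16 GB source volume with a 256 KB grain size requires about 8 KB of memory.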
Table 7 provides an example of RAID level comparisons with their bitmap memory cost, where MS is the
size of the member drives and MC is the number of member drives.
Table 7. RAID level comparisons
Level | Member count | Approximate capacity | Redundancy | Approximate bitmap memory cost
RAID-0 | 1-8 | MC * MS | None | (1 MB per 2 TB of MS) * MC
RAID-1 | 2 | MS | 1 | (1 MB per 2 TB of MS) * (MC/2)
RAID-5 | 3-16 | (MC-1) * MS | 1 | 1 MB per 2 TB of MS with a strip size of 256 KB; double with a strip size of 128 KB
RAID-6 | 5-16 | less than (MC-2) * MS | 2 | 1 MB per 2 TB of MS with a strip size of 256 KB; double with a strip size of 128 KB
RAID-10 | 2-16 (evens) | MC/2 * MS | 1 | (1 MB per 2 TB of MS) * (MC/2)
Note: There is a margin of error on the approximate bitmap memory cost of approximately 15%. For example, the
cost for a 256 KB RAID-5 is ~1.15 MB for the first 2 TB of MS.
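One reading of Table 7 can be sketched as follows. This is an illustration under stated assumptions: the RAID-5/6 cost is modeled per array rather than per member, and the ~15% margin of error mentioned above is not modeled.

```python
def raid_bitmap_cost_mb(level, member_count, member_size_tb, strip_kb=256):
    """Approximate bitmap memory cost in MB, per one reading of Table 7
    (MS = member size, MC = member count). Assumptions: RAID-5/6 cost is
    per array, and a 128 KB strip size doubles that cost."""
    per_member = member_size_tb / 2  # 1 MB per 2 TB of MS
    if level == "raid0":
        return per_member * member_count
    if level in ("raid1", "raid10"):
        return per_member * (member_count / 2)
    # raid5 / raid6
    return per_member * (2 if strip_kb == 128 else 1)
```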
To modify and verify the amount of memory that is available, perform the following steps:
If the volume that you are creating maps to a solid-state drive (SSD), the data that is stored on the
volume is not protected against SSD failures or node failures. To avoid data loss, add a volume copy that
maps to an SSD on another node.
This task assumes that the clustered system has been set up and that you have created storage pools. You
can establish an empty storage pool to hold the MDisks that are used for image mode volumes.
Note: If you want to keep the data on an MDisk, create image mode volumes. This task describes how
to create a volume with striped virtualization.
Procedure
1. Issue the lsmdiskgrp CLI command to list the available storage pools and the amount of free storage
in each group.
The following is an example of the CLI command you can issue to list storage pools:
lsmdiskgrp -delim :
The following is an example of the output that is displayed:
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_capacity:
used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_status
0:mdiskgrp0:degraded:4:0:34.2GB:16:34.2GB:0:0:0:0:0:auto:inactive
1:mdiskgrp1:online:4:6:200GB:16:100GB:400GB:75GB:100GB:200:80:on:active
2. Decide which storage pool you want to provide the storage for the volume.
3. Issue the lsiogrp CLI command to show the I/O groups and the number of volumes assigned to each
I/O group.
Note: It is normal for clustered systems with more than one I/O group to have storage pools that
have volumes in different I/O groups. You can use FlashCopy to make copies of volumes regardless
of whether the source and target volume are in the same I/O group. If you plan to use intra-clustered
system Metro Mirror or Global Mirror, both the master and auxiliary volume must be in the same I/O
group.
The following is an example of the CLI command you can issue to list I/O groups:
lsiogrp -delim :
The following is an example of the output that is displayed:
id:name:node_count:vdisk_count:host_count
0:io_grp0:2:0:2
1:io_grp1:2:0:1
2:io_grp2:0:0:0
3:io_grp3:0:0:0
4:recovery_io_grp:0:0:0
4. Decide which I/O group you want to assign the volume to. This determines which SAN Volume
Controller nodes in the clustered system process the I/O requests from the host systems. If you have
more than one I/O group, make sure you distribute the volumes between the I/O groups so that the
I/O workload is shared evenly between all SAN Volume Controller nodes.
5. Issue the mkvdisk CLI command to create a volume.
The rate at which the volume copies resynchronize after loss of synchronization can be specified
using the -syncrate parameter. The following table defines the rates:
Table 8. Volume copy resynchronization rates
Syncrate value Data copied per second
1-10 128 KB
11-20 256 KB
21-30 512 KB
31-40 1 MB
41-50 2 MB
51-60 4 MB
61-70 8 MB
71-80 16 MB
81-90 32 MB
91-100 64 MB
The default setting is 50. The synchronization rate must be set such that the volume copies will
resynchronize quickly after loss of synchronization.
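Table 8 follows a simple doubling pattern that can be captured in one line. The sketch below merely restates the table; it is not a product API.

```python
def sync_rate_kb_per_sec(syncrate):
    """Data copied per second for a syncrate value of 1-100, per Table 8:
    128 KB/s for values 1-10, doubling for each subsequent band of ten."""
    if not 1 <= syncrate <= 100:
        raise ValueError("syncrate must be between 1 and 100")
    return 128 * 2 ** ((syncrate - 1) // 10)
```

The default syncrate of 50 therefore corresponds to 2 MB per second.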
The following is an example of the CLI command that you can issue to create a volume with two
copies using the I/O group and storage pool name and specifying the synchronization rate:
mkvdisk -iogrp io_grp1 -mdiskgrp grpa:grpb -size 500 -vtype striped
-copies 2 -syncrate 90
Note: If you want to create two volume copies of different types, create the first copy using the
mkvdisk command and then add the second copy using the addvdiskcopy command.
6. Issue the lsvdisk CLI command to list all the volumes that have been created.
The addvdiskcopy command adds a copy to an existing volume, which changes a nonmirrored volume
into a mirrored volume.
Creating mirrored copies of a volume allows the volume to remain accessible even when a managed disk
(MDisk) that the volume depends on becomes unavailable. You can create copies of a volume either from
different storage pools or by creating an image mode copy of the volume. Copies allow for availability of
data; however, they are not separate objects. You can only create or change mirrored copies from the
volume.
In addition, you can use volume mirroring as an alternative method of migrating volumes between
storage pools. For example, if you have a nonmirrored volume in one storage pool and want to migrate
that volume to a second storage pool, you can add a new copy of the volume by specifying the second
storage pool for that volume copy. After the copies have synchronized, you can delete the copy in the
first storage pool. The volume is migrated to the second storage pool while remaining online during the
migration.
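The migration-by-mirroring sequence described above can be scripted from the host. The sketch below only builds the command strings (the volume, pool, and copy names are placeholders); waiting for synchronization with lsvdisksyncprogress between the two steps is left as a comment because it must be polled on the live system.

```python
def migration_by_mirroring(volume, target_pool, old_copy_id):
    """Return the CLI commands for migrating a volume between storage pools
    by adding a copy in the target pool and, after the copies synchronize,
    removing the original copy. All names are placeholders."""
    return [
        f"addvdiskcopy -mdiskgrp {target_pool} {volume}",
        # ...wait until lsvdisksyncprogress reports the new copy in sync...
        f"rmvdiskcopy -copy {old_copy_id} {volume}",
    ]
```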
Use the -copies parameter to specify the number of copies to add to the volume; this is currently limited
to the default value of 1 copy. Use the -mdiskgrp parameter to specify the managed disk group that will
provide storage for the copy; the lsmdiskgrp CLI command lists the available managed disk groups and
the amount of available storage in each group.
For image copies, you must specify the virtualization type using the -vtype parameter and an MDisk that
is in unmanaged mode using the -mdisk parameter. The -vtype parameter is optional for sequential (seq)
and striped volumes. The default virtualization type is striped.
Use the -syncrate parameter to specify the rate at which the volume copies resynchronize after loss of
synchronization. The topic that describes creating volumes using the CLI describes this parameter.
The following is an example of the CLI command that you can issue to add a volume copy:
addvdiskcopy -mdiskgrp 0 vdisk8
where 0 is the name of the managed disk group and vdisk8 is the volume to which the copy will be
added.
The command returns the IDs of the newly created volume copies.
If you are using solid-state drives (SSDs) that are inside a SAN Volume Controller node, always use
volume mirroring with these SSDs. Data stored on SSDs inside SAN Volume Controller nodes is not
protected against SSD failures or node failures. Therefore, if you are deleting a volume copy that uses an
SSD, ensure that the data that is stored on the copy is protected with another volume copy.
Important: Using the -force parameter might result in a loss of access. Use it only under the direction of
the IBM Support Center.
Issue the rmvdiskcopy CLI command to delete a mirrored copy from a volume:
rmvdiskcopy -copy 1 vdisk8
where 1 is the ID of the copy to delete and vdisk8 is the virtual disk to delete the copy from.
If you are configuring a host object on a Fibre Channel attached host, ensure that you have completed all
zone and switch configuration. Also test the configuration to ensure that zoning was created correctly.
If you are configuring a host object on the cluster that uses iSCSI connections, ensure that you have
completed the necessary host-system configurations and have configured the cluster for iSCSI
connections.
Procedure
1. Issue the mkhost CLI command to create a logical host object for a Fibre Channel attached host.
Assign your worldwide port name (WWPN) for the host bus adapters (HBAs) in the hosts.
The following is an example of the CLI command that you can issue to create a Fibre Channel attached host:
mkhost -name new_name -hbawwpn wwpn_list
where new_name is the name of the host and wwpn_list is the WWPN of the HBA.
Note: For more information about worldwide port names, see Determining the WWPNs of a node using
the CLI on page 44.
2. Issue the mkhost CLI command to create a logical host object for an iSCSI-attached host. The
following is an example of the CLI command that you can issue:
mkhost -name new_name -iscsiname iscsi_name_list
where new_name is the name of the host and iscsi_name_list specifies one or more iSCSI qualified
names (IQNs) of this host. Up to 16 names can be specified, provided that the command-line limit is
not reached. Each name should comply with the iSCSI standard, RFC 3720.
3. To add ports to a Fibre Channel attached host, issue the addhostport CLI command. For example:
addhostport -hbawwpn wwpn_list host_name
This command adds another HBA WWPN wwpn_list to the host that was created in step 1 on page 30.
4. To add ports to an iSCSI-attached host, issue the addhostport CLI command with the -iscsiname
parameter. For example:
addhostport -iscsiname iscsi_name_list host_name
where iscsi_name_list specifies the comma-separated list of IQNs to add to the host. This command
adds an IQN to the host that was created in step 2 on page 30.
5. To set the Challenge Handshake Authentication Protocol (CHAP) secret that is used to authenticate
the host for iSCSI I/O, issue the chhost CLI command. This secret is shared between the host and the
cluster. For example, issue the following CLI command:
chhost -chapsecret chap_secret host_name
where chap_secret is the CHAP secret that is used to authenticate the host for iSCSI I/O and
host_name is the name or ID of the host. To list the
CHAP secret for each host, use the lsiscsiauth command. To clear any previously set CHAP secret
for a host, use the chhost -nochapsecret command.
What to do next
After you have created the host object on the cluster, you can map volumes to a host.
If you are unable to discover the disk on the host system or if there are fewer paths available for each
disk than expected, test the connectivity between your host system and the cluster. Depending on the
connection type to the host, these steps might be different. For iSCSI-attached hosts, test your
connectivity between the host and SAN Volume Controller ports by pinging SAN Volume Controller from
the host. Ensure that the firewall and router settings are configured correctly and validate that the values
for the subnet mask and gateway are specified correctly for the SAN Volume Controller host
configuration.
For Fibre Channel attached hosts, ensure that the active switch configuration includes the host zone and
check the host-port link status. To verify end-to-end connectivity, you can use the lsfabric CLI command
or the View Fabric panel under Service and Maintenance container in the management GUI.
Procedure
1. Issue the mkvdiskhostmap CLI command to create host mappings.
This example is a CLI command you can issue to create host mappings:
mkvdiskhostmap -host demohost1 mainvdisk1
Where demohost1 is the name of the host and mainvdisk1 is the name of the volume.
2. After you have mapped volumes to hosts, discover the disks on the host system. This step requires
that you access the host system and use the host-system utilities to discover the new disks that are
made available by the host mappings.
A FlashCopy mapping specifies the source and target virtual disk (VDisk) (volume). Source VDisks
(volumes) and target VDisks (volumes) must meet these requirements:
v They must be the same size
v They must be managed by the same clustered system
A VDisk (volume) can be the source in up to 256 mappings. A mapping is started at the point in time
when the copy is required.
Procedure
1. The source and target VDisk (volume) must be exactly the same size. Issue the lsvdisk -bytes CLI
command to find the size (capacity) of the VDisk (volume) in bytes.
2. Issue the mkfcmap CLI command to create a FlashCopy mapping.
This CLI command example creates a FlashCopy mapping and sets the copy rate:
mkfcmap -source mainvdisk1 -target bkpvdisk1
-name main1copy -copyrate 75
Where mainvdisk1 is the name of the source VDisk (volume), bkpvdisk1 is the name of the VDisk
(volume) that you want to make the target VDisk (volume), main1copy is the name that you want to
call the FlashCopy mapping, and 75 is the copy rate.
This is an example of the CLI command you can issue to create FlashCopy mappings without the
copy rate parameter:
mkfcmap -source mainvdisk2 -target bkpvdisk2
-name main2copy
Where mainvdisk2 is the name of the source VDisk (volume), bkpvdisk2 is the name of the VDisk
(volume) that you want to make the target VDisk (volume), main2copy is the name that you want to
call the FlashCopy mapping.
Note: The default copy rate of 50 is used if you do not specify a copy rate.
If the specified source and target VDisks (volumes) are also the target and source VDisks (volumes) of
an existing mapping, the mapping that is being created and the existing mapping become partners. If
one mapping is created as incremental, its partner is automatically incremental. A mapping can have
only one partner.
3. Issue the lsfcmap CLI command to check the attributes of the FlashCopy mappings that have been
created:
This is an example of a CLI command that you can issue to view the attributes of the FlashCopy
mappings:
lsfcmap -delim :
Where -delim specifies the delimiter. This is an example of the output that is displayed:
id:name:source_vdisk_id:source_vdisk_name:target_vdisk_id:target_vdisk_name:
group_id:group_name:status:progress:copy_rate:clean_progress:incremental
0:main1copy:77:vdisk77:78:vdisk78:::idle_or_copied:0:75:100:off
1:main2copy:79:vdisk79:80:vdisk80:::idle_or_copied:0:50:100:off
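When you script against this output, the -delim option makes it straightforward to split. The following Python sketch (parse_delimited is a hypothetical helper, not part of the product) parses the colon-delimited concise view shown above into one dictionary per mapping; in real output the header is a single line:

```python
def parse_delimited(output, delim=":"):
    """Split a concise -delim view into a list of row dictionaries."""
    lines = [l for l in output.strip().splitlines() if l]
    header = lines[0].split(delim)
    return [dict(zip(header, row.split(delim))) for row in lines[1:]]

sample = """id:name:source_vdisk_id:source_vdisk_name:target_vdisk_id:target_vdisk_name:group_id:group_name:status:progress:copy_rate:clean_progress:incremental
0:main1copy:77:vdisk77:78:vdisk78:::idle_or_copied:0:75:100:off
1:main2copy:79:vdisk79:80:vdisk80:::idle_or_copied:0:50:100:off"""

maps = parse_delimited(sample)
print(maps[0]["name"], maps[0]["copy_rate"])   # main1copy 75
```

Empty fields (such as group_id for a stand-alone mapping) come through as empty strings, which the sketch preserves.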
Starting a FlashCopy mapping creates a point-in-time copy of the data on the source virtual disk (VDisk)
and writes it to the target VDisk (volume) for the mapping.
Procedure
1. Issue the prestartfcmap CLI command to prepare the FlashCopy mapping.
To run the following command, the FlashCopy mapping cannot belong to a consistency group.
prestartfcmap -restore main1copy
Where main1copy is the name of the FlashCopy mapping.
This command specifies the optional restore parameter, which forces the mapping to be prepared
even if the target VDisk is being used as a source in another active FlashCopy mapping.
The mapping enters the preparing state and moves to the prepared state when it is ready.
2. Issue the lsfcmap CLI command to check the state of the mapping.
The following is an example of the output that is displayed:
lsfcmap -delim :
id:name:source_vdisk_id:source_vdisk_name:target_vdisk_id:
target_vdisk_name:group_id:group_name:status:progress:copy_rate
0:main1copy:0:mainvdisk1:1:bkpvdisk1:::prepared:0:50
Results
You have created a point-in-time copy of the data on a source VDisk and written that data to a target
VDisk. The data on the target VDisk is only recognized by the hosts that are mapped to it.
Procedure
1. To stop a FlashCopy mapping, issue the following stopfcmap command:
stopfcmap fc_map_id or fc_map_name
where fc_map_id or fc_map_name is the ID or name of the mapping to stop.
2. To immediately stop all processing that is associated with the mapping and break the dependency on
the source VDisk (volume) of any mappings that are also dependent on the target disk, issue the
following command:
stopfcmap -force -split fc_map_id or fc_map_name
When you use the force parameter, all FlashCopy mappings that depend on this mapping (as listed
by the lsfcmapdependentmaps command) are also stopped.
Important: Using the force parameter might result in a loss of access. Use it only under the direction
of the IBM Support Center.
The split parameter can be specified only when stopping a map that has a progress of 100 as shown
by the lsfcmap command. The split parameter removes the dependency of any other mappings on
the source volume. It might be used prior to starting another FlashCopy mapping whose target disk is
the source disk of the mapping being stopped. After the mapping is stopped with the split option,
you can start the other mapping without the restore option.
The rmfcmap CLI command deletes an existing mapping if the mapping is in the idle_or_copied or
stopped state. If it is in the stopped state, the force parameter is required to specify that the target VDisk
(volume) is brought online. If the mapping is in any other state, you must stop the mapping before you
can delete it.
If deleting the mapping splits the tree that contains the mapping, none of the mappings in either
resulting tree can depend on any mapping in the other tree. To display a list of dependent FlashCopy
mappings, use the lsfcmapdependentmaps command.
Procedure
1. To delete an existing mapping, issue the rmfcmap CLI command:
rmfcmap fc_map_id or fc_map_name
where fc_map_id or fc_map_name is the ID or name of the mapping to delete.
2. To delete an existing mapping and bring the target VDisk online, issue the following command:
rmfcmap -force fc_map_id or fc_map_name
where fc_map_id or fc_map_name is the ID or name of the mapping to delete.
Results
If you have created several FlashCopy mappings for a group of virtual disks (volumes) that contain
elements of data for the same application, it can be convenient to assign these mappings to a single
FlashCopy consistency group. You can then issue a single prepare or start command for the whole group.
For example, you can copy all of the files for a database at the same time.
Perform the following steps to add FlashCopy mappings to a new FlashCopy consistency group:
Procedure
1. Issue the mkfcconsistgrp CLI command to create a FlashCopy consistency group.
The following is an example of the CLI command you can issue to create a FlashCopy consistency
group:
mkfcconsistgrp -name FCcgrp0 -autodelete
Where FCcgrp0 is the name of the FlashCopy consistency group. The -autodelete parameter specifies
to delete the consistency group when the last FlashCopy mapping is deleted or removed from the
consistency group.
2. Issue the lsfcconsistgrp CLI command to display the attributes of the group that you have created.
The following is an example of the CLI command you can issue to display the attributes of a
FlashCopy consistency group:
lsfcconsistgrp -delim : FCcgrp0
The following is an example of the output that is displayed:
id:1
name:FCcgrp0
status:idle_or_copied
autodelete:on
FC_mapping_id:0
FC_mapping_name:fcmap0
FC_mapping_id:1
FC_mapping_name:fcmap1
Note: For any group that has just been created, the status reported is empty.
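The detailed view shown above is one key:value pair per line, with the FC_mapping_id and FC_mapping_name keys repeated once per member mapping. The following Python sketch (parse_fcconsistgrp is a hypothetical helper, not a product utility) folds that layout into a single dictionary with a list of member mappings:

```python
def parse_fcconsistgrp(output):
    """Parse the detailed key:value view of lsfcconsistgrp into a dict;
    repeated FC_mapping_* keys become entries in a 'mappings' list."""
    grp = {"mappings": []}
    for line in output.strip().splitlines():
        key, _, value = line.partition(":")
        if key == "FC_mapping_id":
            grp["mappings"].append({"id": value})
        elif key == "FC_mapping_name":
            grp["mappings"][-1]["name"] = value
        else:
            grp[key] = value
    return grp

sample = """id:1
name:FCcgrp0
status:idle_or_copied
autodelete:on
FC_mapping_id:0
FC_mapping_name:fcmap0
FC_mapping_id:1
FC_mapping_name:fcmap1"""

grp = parse_fcconsistgrp(sample)
print(grp["name"], len(grp["mappings"]))   # FCcgrp0 2
```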
3. Issue the chfcmap CLI command to add FlashCopy mappings to the FlashCopy consistency group:
The following are examples of the CLI commands you can issue to add Flash Copy mappings to the
FlashCopy consistency group:
chfcmap -consistgrp FCcgrp0 main1copy
chfcmap -consistgrp FCcgrp0 main2copy
Where FCcgrp0 is the name of the FlashCopy consistency group and main1copy, main2copy are the
names of the FlashCopy mappings.
4. Issue the lsfcmap CLI command to display the new attributes of the FlashCopy mappings.
5. Issue the lsfcconsistgrp CLI command to display the detailed attributes of the group.
The following is an example of a CLI command that you can issue to display detailed attributes:
lsfcconsistgrp -delim : FCcgrp0
Where FCcgrp0 is the name of the FlashCopy consistency group, and -delim specifies the delimiter.
The following is an example of the output that is displayed:
id:1
name:FCcgrp0
status:idle_or_copied
autodelete:off
FC_mapping_id:0
FC_mapping_name:main1copy
FC_mapping_id:1
FC_mapping_name:main2copy
Successful completion of the FlashCopy process creates a point-in-time copy of the data on the source
virtual disk (VDisk) and writes it to the target VDisk (volume) for each mapping in the group. When
several mappings are assigned to a FlashCopy consistency group, only a single prepare command is
issued to prepare every FlashCopy mapping in the group; only a single start command is issued to start
every FlashCopy mapping in the group.
Perform the following steps to prepare and start a FlashCopy consistency group:
Procedure
1. Issue the prestartfcconsistgrp CLI command to prepare the FlashCopy consistency group. This
command must be issued before the copy process can begin.
Remember: A single prepare command prepares all of the mappings simultaneously for the entire
group.
An example of the CLI command issued to prepare the FlashCopy consistency group:
prestartfcconsistgrp -restore maintobkpfcopy
Where maintobkpfcopy is the name of the FlashCopy consistency group.
The optional restore parameter forces the consistency group to be prepared even if the target
volume is being used as a source volume in another active mapping. An active mapping is in the
copying, suspended, or stopping state. The group enters the preparing state, and then moves to the
prepared state when it is ready.
2. Issue the lsfcconsistgrp command to check the status of the FlashCopy consistency group.
An example of the CLI command issued to check the status of the FlashCopy consistency group.
lsfcconsistgrp -delim :
An example of the output displayed:
id:name:status
1:maintobkpfcopy:prepared
3. Issue the startfcconsistgrp CLI command to start the FlashCopy consistency group to make the copy.
Remember: A single start command starts all the mappings simultaneously for the entire group.
An example of the CLI command issued to start the FlashCopy consistency group mappings:
startfcconsistgrp -prep -restore maintobkpfcopy
Where maintobkpfcopy is the name of the FlashCopy consistency group.
Include the prep parameter, and the system automatically issues the prestartfcconsistgrp command
for the specified group.
Note: Combining the restore parameter with the prep parameter force-starts the consistency group.
This occurs even if the target volume is being used as a source volume in another active mapping. An
active mapping is in the copying, suspended, or stopping state.
The FlashCopy consistency group enters the copying state and returns to the idle_or_copied state when
complete.
4. Issue the lsfcconsistgrp command to check the status of the FlashCopy consistency group.
An example of the CLI command issued to check the status of the FlashCopy consistency group:
lsfcconsistgrp -delim : maintobkpfcopy
Where maintobkpfcopy is the name of the FlashCopy consistency group.
An example of the output displayed during the copying process:
id:name:status
1:maintobkpfcopy:copying
The stopfcconsistgrp CLI command stops all processing that is associated with a FlashCopy consistency
group that is in one of the following processing states: prepared, copying, stopping, or suspended.
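The state rules described in this section can be sketched as a small model (illustrative only, not product code): prestartfcconsistgrp moves a group through preparing to prepared, startfcconsistgrp moves it through copying back to idle_or_copied, and stopfcconsistgrp is valid only in the four states listed above:

```python
# States in which stopfcconsistgrp is accepted, per the text above.
STOPPABLE_STATES = {"prepared", "copying", "stopping", "suspended"}

def can_stop(state):
    """Return True if stopfcconsistgrp is valid for a group in this state."""
    return state in STOPPABLE_STATES

print(can_stop("copying"))         # a copying group can be stopped
print(can_stop("idle_or_copied"))  # an idle_or_copied group cannot
```

A script that wraps these commands can use such a check to avoid issuing a stop that the system would reject.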
Procedure
1. To stop a FlashCopy consistency group, issue the stopfcconsistgrp CLI command:
stopfcconsistgrp fc_consist_group_id or fc_consist_group_name
where fc_consist_group_id or fc_consist_group_name is the ID or name of the consistency group to stop.
2. To stop a consistency group and break the dependency on the source VDisks of any mappings that
are also dependent on the target VDisk, issue the following command:
stopfcconsistgrp -force -split fc_consist_group_id or fc_consist_group_name
Results
The rmfcconsistgrp CLI command deletes an existing FlashCopy consistency group. The force parameter
is required only when the consistency group that you want to delete contains mappings.
Important: Using the force parameter might result in a loss of access. Use it only under the direction of
the IBM Support Center.
Procedure
1. To delete an existing consistency group that does not contain mappings, issue the rmfcconsistgrp CLI
command:
rmfcconsistgrp fc_consist_group_id or fc_consist_group_name
where fc_consist_group_id or fc_consist_group_name is the ID or name of the consistency group to
delete.
2. To delete an existing consistency group that contains mappings that are members of the consistency
group, issue the following command:
rmfcconsistgrp -force fc_consist_group_id or fc_consist_group_name
where fc_consist_group_id or fc_consist_group_name is the ID or name of the consistency group to
delete.
All the mappings that are associated with the consistency group are removed from the group and
changed to stand-alone mappings. To delete a single mapping in the consistency group, you must use
the rmfcmap command.
Results
Procedure
1. To create a Metro Mirror relationship, run the mkrcrelationship command. For example, enter:
mkrcrelationship -master master_vdisk_id
-aux aux_vdisk_id -cluster cluster_id
Where master_vdisk_id is the ID of the master volume, aux_vdisk_id is the ID of the auxiliary volume,
and cluster_id is the ID of the remote clustered system.
2. To create a new Global Mirror relationship, run the mkrcrelationship command with the -global
parameter. For example, enter:
mkrcrelationship -master master_vdisk_id
-aux aux_vdisk_id -cluster cluster_id -global
Where master_vdisk_id is the ID of the master volume, aux_vdisk_id is the ID of the auxiliary volume,
and cluster_id is the ID of the remote system.
3. To create a new Global Mirror relationship with cycling enabled, specify the cycling mode. For
example, enter:
mkrcrelationship -master master_vdisk_id
-aux aux_vdisk_id -cluster cluster_id -global -cyclingmode multi
To modify Metro Mirror or Global Mirror relationships, run the chrcrelationship command.
Procedure
1. Run the chrcrelationship command to change the name of a Metro Mirror or Global Mirror
relationship. For example, to change the relationship name, enter:
chrcrelationship -name new_rc_rel_name previous_rc_rel_name
Where new_rc_rel_name is the new name of the relationship and previous_rc_rel_name is the previous
name of the relationship.
2. Run the chrcrelationship command to remove a relationship from whichever consistency group it is a
member of. For example, enter:
chrcrelationship -force -noconsistgrp rc_rel_name/id
Where rc_rel_name/id is the name or ID of the relationship.
For a full list of applicable options, refer to chrcrelationship on page 379.
Important: Using the force parameter might result in a loss of access. Use it only under the direction
of the IBM Support Center.
To start and stop Metro Mirror or Global Mirror relationships, perform these steps:
Procedure
1. To start a Metro Mirror or Global Mirror relationship, run the startrcrelationship command. For
example, enter:
startrcrelationship rc_rel_id
Where rc_rel_id is the ID of the relationship that you want to start in a stand-alone relationship.
To display the progress of the background copy of Metro Mirror or Global Mirror relationships, run the
lsrcrelationshipprogress command.
Procedure
1. To display data progress without headings for columns of data or for each item of data in a Metro
Mirror or Global Mirror relationship, run the lsrcrelationshipprogress -nohdr command. For example,
to display data of the relationship with headings suppressed, enter:
lsrcrelationshipprogress -nohdr rc_rel_name
Where rc_rel_name is the name of the specified object type.
2. To display the progress of a background copy of a Metro Mirror or Global Mirror relationship as a
percentage, run the lsrcrelationshipprogress -delim command. The colon character (:) separates all
items of data in a concise view, and the spacing of columns does not occur. In a detailed view, the
data is separated from its header by the specified delimiter. For example, enter:
lsrcrelationshipprogress -delim : 0
The resulting output is displayed, such as in this example:
id:progress
0:58
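For monitoring scripts, the colon-delimited view above is easy to consume. The following Python sketch (copy_progress is a hypothetical helper, not a product command) reads the id:progress output shown and returns the background-copy percentage as an integer:

```python
def copy_progress(output):
    """Return the background-copy percentage from an 'id:progress' view."""
    header, row = [l for l in output.strip().splitlines() if l]
    fields = dict(zip(header.split(":"), row.split(":")))
    return int(fields["progress"])

print(copy_progress("id:progress\n0:58"))   # 58
```

A script can poll this value and consider the relationship synchronized when it reaches 100.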
To switch the roles of primary and secondary volumes in Metro Mirror or Global Mirror relationships,
follow these steps:
Procedure
1. To make the master disk in a Metro Mirror or Global Mirror relationship the primary, run the
switchrcrelationship -primary master command. For example, enter:
switchrcrelationship -primary master rc_rel_id
Where rc_rel_id is the ID of the relationship to switch.
2. To make the auxiliary disk in a Metro Mirror or Global Mirror relationship the primary, run the
switchrcrelationship -primary aux command. For example, enter:
switchrcrelationship -primary aux rc_rel_id
Where rc_rel_id is the ID of the relationship to switch.
Remember:
v You cannot switch a Global Mirror relationship while a cycling mode is set.
v To switch the direction of a multi cycling-mode relationship, stop the relationship with access
enabled, and then start it in the opposite direction by using the -force parameter. (Using the force
parameter might result in a loss of access. Use it only under the direction of the IBM Support
Center.)
Deleting Metro Mirror and Global Mirror relationships using the CLI
You can use the command-line interface (CLI) to delete Metro Mirror and Global Mirror relationships.
Procedure
To delete Metro Mirror and Global Mirror relationships, run the rmrcrelationship command. For
example, enter:
rmrcrelationship rc_rel_name/id
To create Metro Mirror or Global Mirror consistency groups, perform these steps:
Procedure
1. To create a Metro Mirror or Global Mirror consistency group, run the mkrcconsistgrp command. For
example, enter:
mkrcconsistgrp -name new_name -cluster cluster_id
where new_name is the name of the new consistency group and cluster_id is the ID of the remote
cluster for the new consistency group. If -cluster is not specified, a consistency group is created only
on the local cluster. The new consistency group does not contain any relationships and will be in the
empty state.
2. To add Metro Mirror or Global Mirror relationships to the group, run the chrcrelationship command.
For example, enter:
chrcrelationship -consistgrp consist_group_name rc_rel_id
where consist_group_name is the name of the new consistency group to assign the relationship to and
rc_rel_id is the ID of the relationship.
To assign or modify the name of a Metro Mirror or Global Mirror consistency group, run the
chrcconsistgrp command.
Procedure
1. Run the chrcconsistgrp command to assign a new name to a Metro Mirror or Global Mirror
consistency group. For example, enter:
chrcconsistgrp -name new_name_arg consist_group_id
Where new_name_arg is the new name to assign and consist_group_id is the ID of the consistency
group.
2. Run the chrcconsistgrp command to change the name of the consistency group. For example, enter:
chrcconsistgrp -name new_consist_group_name previous_consist_group_name
Where new_consist_group_name is the assigned new name of the consistency group and
previous_consist_group_name is the previous name of the consistency group.
To start and stop Metro Mirror or Global Mirror consistency-group copy processes, perform these steps:
Procedure
1. To start a Metro Mirror or Global Mirror consistency-group copy process, set the direction of copy if it
is undefined and optionally mark the secondary VDisks of the consistency group as clean. Run the
startrcconsistgrp command. For example, enter:
startrcconsistgrp rc_consist_group_id
Where rc_consist_group_id is the ID of the consistency group to start processing.
2. To stop the copy process for a Metro Mirror or Global Mirror consistency group, run the
stoprcconsistgrp command.
For example, enter:
stoprcconsistgrp rc_consist_group_id
Where rc_consist_group_id is the ID of the consistency group that you want to stop processing.
If the group is in a consistent state, you can also use this command to enable write access to the
secondary virtual disks (VDisks) in the group.
To delete existing Metro Mirror or Global Mirror consistency groups, follow these steps:
Procedure
1. To delete a Metro Mirror or Global Mirror consistency group, run the rmrcconsistgrp command. For
example, enter:
rmrcconsistgrp rc_consist_group_id
Where rc_consist_group_id is the ID of the consistency group to delete.
2. If a Metro Mirror or Global Mirror consistency group is not empty, you must use the -force
parameter to delete the consistency group. For example, enter:
rmrcconsistgrp -force rc_consist_group_id
Where rc_consist_group_id is the ID of the consistency group to delete. This command causes all
relationships that are members of the deleted group to become stand-alone relationships.
Important: Using the force parameter might result in a loss of access. Use it only under the direction
of the IBM Support Center.
Creating Metro Mirror and Global Mirror partnerships using the CLI
You can use the command-line interface (CLI) to create Metro Mirror and Global Mirror partnerships
between two clusters.
Perform the following steps to create Metro Mirror and Global Mirror partnerships:
Procedure
1. To create Metro Mirror and Global Mirror partnerships, run the mkpartnership command. For
example, enter:
mkpartnership -bandwidth bandwidth_in_mbps remote_cluster_id
where bandwidth_in_mbps specifies the bandwidth (in megabytes per second) that is used by the
background copy process between the clusters and remote_cluster_id is the ID of the remote cluster.
2. Run the mkpartnership command from the remote cluster. For example, enter:
mkpartnership -bandwidth bandwidth_in_mbps local_cluster_id
where bandwidth_in_mbps specifies the bandwidth (in megabytes per second) that is used by the
background copy process between the clusters and local_cluster_id is the ID of the local cluster.
Modifying Metro Mirror and Global Mirror partnerships using the CLI
You can use the command-line interface (CLI) to modify Metro Mirror and Global Mirror partnerships.
The partnership bandwidth, which is also known as background copy, controls the rate at which data is
sent from the local clustered system to the remote clustered system. The partnership bandwidth can be
changed to help manage the use of intersystem links. It is measured in megabytes per second (MBps).
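As a hedged illustration of what the bandwidth setting means in practice, the following Python sketch (a back-of-the-envelope estimate, not a product formula) computes how long a background copy of a given amount of data would take at a configured rate, ignoring host I/O and link overhead:

```python
def estimate_copy_seconds(data_mb, bandwidth_in_mbps):
    """Estimate background-copy time in seconds for data_mb megabytes
    at bandwidth_in_mbps megabytes per second (idealized, no overhead)."""
    if bandwidth_in_mbps <= 0:
        raise ValueError("bandwidth must be positive")
    return data_mb / bandwidth_in_mbps

# Copying 10240 MB (10 GB) at 50 MBps takes roughly 205 seconds.
print(round(estimate_copy_seconds(10240, 50)))
```

Such an estimate can help decide whether the configured partnership bandwidth leaves enough intersystem link capacity for foreground traffic.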
Perform the following steps to modify Metro Mirror and Global Mirror partnerships:
Procedure
1. To modify Metro Mirror and Global Mirror partnerships, run the chpartnership command. For
example, enter:
chpartnership -bandwidth bandwidth_in_mbps remote_cluster_id
where bandwidth_in_mbps is the new bandwidth (in megabytes per second) from the local clustered
system to the remote clustered system, and remote_cluster_id is the ID of the remote system.
2. Run the chpartnership command from the remote clustered system. For example, enter:
chpartnership -bandwidth bandwidth_in_mbps local_cluster_id
where bandwidth_in_mbps is the new bandwidth (in megabytes per second) from the remote clustered
system to the local clustered system, and local_cluster_id is the ID of the local system.
Perform the following steps to start and stop Metro Mirror and Global Mirror partnerships:
Procedure
1. To start a Metro Mirror or Global Mirror partnership, run the chpartnership command from either
cluster. For example, enter:
chpartnership -start remote_cluster_id
Where remote_cluster_id is the ID of the remote cluster. The mkpartnership command starts the
partnership by default.
2. To stop a Metro Mirror or Global Mirror partnership, run the chpartnership command from either
cluster.
For example, enter:
chpartnership -stop remote_cluster_id
Where remote_cluster_id is the ID of the remote cluster.
Deleting Metro Mirror and Global Mirror partnerships using the CLI
You can use the command-line interface (CLI) to delete Metro Mirror and Global Mirror partnerships.
Perform the following steps to delete Metro Mirror and Global Mirror partnerships:
Procedure
1. If a Metro Mirror or Global Mirror partnership has configured relationships or groups, you must stop
the partnership before you can delete it. For example, enter:
chpartnership -stop remote_cluster_id
Where remote_cluster_id is the ID of the remote cluster.
2. To delete a Metro Mirror and Global Mirror partnership, run the rmpartnership command from either
cluster. For example, enter:
rmpartnership remote_cluster_id
Where remote_cluster_id is the ID of the remote cluster.
Procedure
1. Issue the lsnode CLI command to list the nodes in the clustered system.
2. Record the name or ID of the node for which you want to determine the WWPNs.
3. Issue the lsportfc CLI command and specify the node name or ID that was recorded in step 2.
The following is an example of the CLI command you can issue:
lsportfc -filtervalue node_id=2
Where node_id=2 specifies the ID of the node for which you want to determine the WWPNs. The
following is the output from the command:
id fc_io_port_id port_id type port_speed node_id node_name WWPN nportid status
0 1 1 fc 8Gb 2 node2 5005076801405F82 010E00 active
1 2 2 fc 8Gb 2 node2 5005076801305F82 010A00 active
2 3 3 fc 8Gb 2 node2 5005076801105F82 010E00 active
3 4 4 fc 8Gb 2 node2 5005076801205F82 10A00 active
4 5 3 ethernet 10Gb 2 node2 5005076801505F82 540531 active
5 6 4 ethernet 10Gb 2 node2 5005076801605F82 E80326 active
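If you need the WWPNs programmatically, the whitespace-delimited table above can be split by column. The following Python sketch (fc_wwpns is a hypothetical helper, not a product utility) extracts the WWPNs of only the Fibre Channel ports, filtering out the Ethernet rows:

```python
def fc_wwpns(output):
    """Return the WWPN column for rows whose type column is 'fc',
    assuming whitespace-delimited lsportfc output with a header row."""
    lines = output.strip().splitlines()
    header = lines[0].split()
    rows = [dict(zip(header, l.split())) for l in lines[1:]]
    return [r["WWPN"] for r in rows if r["type"] == "fc"]

sample = """id fc_io_port_id port_id type port_speed node_id node_name WWPN nportid status
0 1 1 fc 8Gb 2 node2 5005076801405F82 010E00 active
1 2 2 fc 8Gb 2 node2 5005076801305F82 010A00 active
4 5 3 ethernet 10Gb 2 node2 5005076801505F82 540531 active"""

print(fc_wwpns(sample))
```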
If a node goes offline or is removed from a clustered system, all volumes that are dependent on the node
go offline. Before taking a node offline or removing a node from a clustered system, run the
lsdependentvdisks command to identify any node-dependent volumes.
By default, the lsdependentvdisks command also checks all available quorum disks. If the quorum disks
are accessible only through the specified node, the command returns an error.
Various scenarios can produce node-dependent volumes. The following examples are common scenarios
in which the lsdependentvdisks command will return node-dependent volumes:
1. The node contains solid-state drives (SSDs) and also contains the only synchronized copy of a
mirrored volume.
2. The node is the only node that can access an MDisk on the SAN fabric.
3. The other node in the I/O group is offline (all volumes in the I/O group are returned).
4. Pinned data in the cache is stopping the partner node from joining the I/O group.
To resolve (1), allow volume mirror synchronizations between SSD MDisks to complete. To resolve (2-4),
bring any offline MDisks online and repair any degraded paths.
Note: The command lists the node-dependent volumes at the time the command is run; subsequent
changes to a clustered system require running the command again.
Procedure
1. Issue the lsdependentvdisks CLI command.
The following example shows the CLI format for listing the volumes that are dependent on drives 0 and 1:
lsdependentvdisks -delim : -drive 0:1
The following example shows the output that is displayed:
vdisk_id:vdisk_name
4:vdisk4
5:vdisk5
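Before removing a node, the output of this command can be checked automatically. The following is a minimal sketch of such a pre-check; the helper name is illustrative, and the captured output below is the example from the procedure above:

```shell
# Hypothetical pre-check helper: given captured lsdependentvdisks output,
# report whether a node can be taken offline without bringing volumes down.
# Anything past the header line means node-dependent volumes exist.
node_is_safe_to_remove() {
    dependent=$(printf '%s\n' "$1" | tail -n +2)
    [ -z "$dependent" ]
}

# Using the example output from the procedure above:
sample='vdisk_id:vdisk_name
4:vdisk4
5:vdisk5'

if node_is_safe_to_remove "$sample"; then
    echo "safe to remove node"
else
    echo "node-dependent volumes found; resolve before removing the node"
fi
```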
Determining the VDisk name from the device identifier on the host
You can use the command-line interface (CLI) to determine the virtual disk (VDisk) name from the device
identifier on the host.
Each VDisk that is exported by the SAN Volume Controller is assigned a unique device identifier. The
device identifier uniquely identifies the VDisk (volume) and can be used to determine which VDisk
corresponds to the volume that the host sees.
Perform the following steps to determine the VDisk name from the device identifier:
Procedure
1. Find the device identifier. For example, if you are using the subsystem device driver (SDD), the disk
identifier is referred to as the virtual path (vpath) number. You can issue the following SDD command
to find the vpath serial number:
datapath query device
For other multipathing drivers, refer to the documentation that is provided with your multipathing
driver to determine the device identifier.
2. Find the host object that is defined to the SAN Volume Controller and corresponds with the host that
you are working with.
a. Find the worldwide port names (WWPNs) by looking at the device definitions that are stored
by your operating system. For example, on AIX the WWPNs are stored in the ODM; on
Windows, you can find them in the HBA BIOS.
b. Verify which host object is defined to the SAN Volume Controller to which these ports belong.
The ports are stored as part of the detailed view, so you must list each host by issuing the
following CLI command:
lshost id | name
Where name/id is the name or ID of the host.
c. Check for matching WWPNs.
3. Issue the following command to list the VDisk-to-host mappings:
lshostvdiskmap hostname
Where hostname is the name of the host.
4. Find the VDisk UID that matches the device identifier and record the VDisk name or ID.
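The matching in step 4 can be scripted. The following sketch uses a simplified stand-in for lshostvdiskmap output (the column layout is abbreviated and the UID values are made up for illustration):

```shell
# Simplified, illustrative lshostvdiskmap output: real output has more
# columns, and the UIDs below are invented for this example.
hostmap='id name SCSI_id vdisk_id vdisk_name vdisk_UID
0 host1 0 10 vdisk10 6005076801AF813F1000000000000010
0 host1 1 11 vdisk11 6005076801AF813F1000000000000011'

# The device identifier reported by the multipathing driver on the host:
device_uid=6005076801AF813F1000000000000011

# Match the UID (column 6) and print the VDisk name (column 5).
vdisk_name=$(printf '%s\n' "$hostmap" | awk -v uid="$device_uid" '$6 == uid {print $5}')
echo "$vdisk_name"
```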
About this task
Perform the following steps to determine the host that the volume is mapped to:
Procedure
1. Find the volume name or ID that you want to check.
2. Issue the following CLI command to list the hosts that this volume is mapped to:
lsvdiskhostmap vdiskname/id
where vdiskname/id is the name or ID of the volume.
3. Find the host name or ID to determine which host this volume is mapped to.
v If no data is returned, the volume is not mapped to any hosts.
Select one or more of the following options to determine the relationship between volumes and MDisks:
Procedure
v To display a list of the IDs that correspond to the MDisks that comprise the volume, issue the
following CLI command:
lsvdiskmember vdiskname/id
where vdiskname/id is the name or ID of the volume.
v To display a list of IDs that correspond to the volumes that are using this MDisk, issue the following
CLI command:
lsmdiskmember mdiskname/id
where mdiskname/id is the name or ID of the MDisk.
v To display a table of volume IDs and the corresponding number of extents that are being used by each
volume, issue the following CLI command:
lsmdiskextent mdiskname/id
where mdiskname/id is the name or ID of the MDisk.
v To display a table of MDisk IDs and the corresponding number of extents that each MDisk provides as
storage for the given volume, issue the following CLI command:
lsvdiskextent vdiskname/id
where vdiskname/id is the name or ID of the volume.
Each MDisk corresponds with a single RAID array, or with a single partition on a given RAID array. Each
RAID controller defines a LUN number for this disk. The LUN number and controller name or ID are
needed to determine the relationship between MDisks and RAID arrays or partitions.
Procedure
1. Issue the following command to display a detailed view of the MDisk:
lsmdisk mdiskname
Where mdiskname is the name of the MDisk for which you want to display a detailed view.
2. Record the controller name or controller ID and the controller LUN number.
3. Issue the following command to display a detailed view of the controller:
lscontroller controllername
Where controllername is the name of the controller that you recorded in step 2.
4. Record the vendor ID, product ID, and WWNN. You can use this information to determine what is
being presented to the MDisk.
5. From the native user interface for the given controller, list the LUNs it is presenting and match the
LUN number with that noted in step 1. This tells you the exact RAID array or partition that
corresponds with the MDisk.
Perform the following steps to increase the size of your clustered system:
Procedure
1. Add a node to your clustered system and repeat this step for the second node.
2. If you want to balance the load between the existing I/O groups and the new I/O groups, you can
migrate your volumes to new I/O groups. Repeat this step for all volumes that you want to assign to
the new I/O group.
Adding a node to increase the size of a clustered system using the CLI
You can use the command-line interface (CLI) to increase the size of a clustered system by adding a pair
of nodes to create a full I/O group.
Before you begin
Attention: If you are adding a node that was previously removed from a clustered system (system)
ensure that these conditions have been met:
v All hosts that accessed the removed node through its worldwide port names (WWPNs) have been
reconfigured to use the WWPN for the new node or to no longer access the node. Failure to do so can
result in data corruption.
v Ensure that the system ID has been reset on the new control enclosure. This can be performed using
either of the following methods:
– Either of the new control enclosure nodes, using the command-line interface (CLI). See
chenclosurevpd on page 425.
– The service assistant, by performing these steps:
- Connect to the service assistant on either of the nodes in the control enclosure.
- Select Configure Enclosure.
- Select the Reset the system ID option. Do not make any other changes on the panel.
- Click Modify to make the changes.
Complete these steps to add a node and increase the size of a clustered system:
Procedure
1. Install the new nodes. Connect the nodes to the Fibre Channel.
2. Using the front panel of the node, record the WWNN. The front panel only shows the last 5 digits of
the WWNN.
3. Issue this command to verify that the node is detected on the fabric:
lsnodecandidate
This example shows the output for this command:
# svcinfo lsnodecandidate
id panel_name UPS_serial_number UPS_unique_id hardware
5005076801002838 104890 10004BC010 20400001124C0040 8G4
5005076801003205 106142 10004BC052 20400001124C0142 8G4
4. Verify that the last 5 digits on the WWNN that was reported by lsnodecandidate match the WWNN
that was recorded from the front panel. Record the full WWNN (id) for use in later steps.
5. Issue this command to determine the I/O group where the node should be added:
lsiogrp
6. Record the name or ID of the first I/O group that has a node count of zero (0). You will need the ID
for the next step.
Note: You only need to do this step for the first node that is added. The second node of the pair uses
the same I/O group number.
7. Issue this command to add the node to the clustered system:
addnode -wwnodename WWNN -iogrp newiogrpname/id [-name newnodename]
Where WWNN is the WWNN of the node, newiogrpname/id is the name or ID of the I/O group that
you want to add the node to, and newnodename is the name that you want to assign to the node. If you
do not specify a new node name, a default name is assigned; however, it is recommended that you
specify a meaningful name.
8. Record this information for future reference:
v Serial number.
v Worldwide node name.
v All of the worldwide port names.
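The last-five-digits comparison in step 4 can be sketched as a small helper (the function name is illustrative; the WWNN and panel digits are taken from the example output above):

```shell
# Compare the last five characters of the full WWNN reported by
# lsnodecandidate with the digits shown on the node's front panel.
wwnn_matches_panel() {
    full=$1; panel=$2
    tail5=$(printf '%s' "$full" | awk '{print substr($0, length($0) - 4)}')
    [ "$tail5" = "$panel" ]
}

# Using the first candidate from the example lsnodecandidate output:
wwnn_matches_panel 5005076801002838 02838 && echo "WWNN confirmed"
```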
What to do next
Add additional nodes until the I/O group contains two nodes. You may need to reconfigure your storage
systems to allow the new nodes to access them. If the storage system uses mapping to present RAID
arrays or partitions to the clustered system and the WWNNs or the worldwide port names have changed,
you must modify the port groups that belong to the clustered system.
Attention: Run the repairvdiskcopy command only if all volume copies are synchronized.
When you issue the repairvdiskcopy command, you must use only one of the -validate, -medium, or
-resync parameters. You must also specify the name or ID of the volume to be validated and repaired as
the last entry on the command line. After you issue the command, no output is displayed.
-validate
Use this parameter if you only want to verify that the mirrored volume copies are identical. If any
difference is found, the command stops and logs an error that includes the logical block address
(LBA) and the length of the first difference. You can use this parameter, starting at a different LBA
each time, to count the number of differences on a volume.
-medium
Use this parameter to convert sectors on all volume copies that contain different contents into virtual
medium errors. Upon completion, the command logs an event, which indicates the number of
differences that were found, the number that were converted into medium errors, and the number
that were not converted. Use this option if you are unsure what the correct data is, and you do not
want an incorrect version of the data to be used.
-resync
Use this parameter to overwrite contents from the specified primary volume copy to the other
volume copy. The command corrects any differing sectors by copying the sectors from the primary
copy to the copies being compared. Upon completion, the command process logs an event, which
indicates the number of differences that were corrected. Use this action if you are sure that either the
primary volume copy data is correct or that your host applications can handle incorrect data.
-startlba lba
Optionally, use this parameter to specify the starting Logical Block Address (LBA) from which to start
the validation and repair. If you previously used the validate parameter, an error was logged with
the LBA where the first difference, if any, was found. Reissue repairvdiskcopy with that LBA to
avoid reprocessing the initial sectors that compared identically. Continue to reissue repairvdiskcopy
using this parameter to list all the differences.
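The reissue loop described for -startlba can be sketched as follows; the helper is illustrative and only builds the command string, using the volume name and LBA from the example below:

```shell
# Build the next repairvdiskcopy invocation from the LBA that the previous
# -validate run logged for the first difference it found.
build_validate_cmd() {
    vdisk=$1; lba=$2
    printf 'repairvdiskcopy -validate -startlba %s %s' "$lba" "$vdisk"
}

build_validate_cmd vdisk8 20
```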
Issue the following command to validate and, if necessary, automatically repair mirrored copies of the
specified volume:
repairvdiskcopy -resync -startlba 20 vdisk8
Notes:
1. Only one repairvdiskcopy command can run on a volume at a time.
2. Once you start the repairvdiskcopy command, you cannot use the command to stop processing.
3. The primary copy of a mirrored volume cannot be changed while the repairvdiskcopy -resync
command is running.
4. If there is only one mirrored copy, the command returns immediately with an error.
5. If a copy being compared goes offline, the command is halted with an error. The command is not
automatically resumed when the copy is brought back online.
6. In the case where one copy is readable but the other copy has a medium error, the command process
automatically attempts to fix the medium error by writing the read data from the other copy.
7. If no differing sectors are found during repairvdiskcopy processing, an informational error is logged
at the end of the process.
Checking the progress of validation and repair of volume copies using the CLI
Use the lsrepairvdiskcopyprogress command to display the progress of mirrored volume validation and
repairs. You can specify a volume copy using the -copy id parameter. To display the volumes that have
two or more copies with an active task, specify the command with no parameters; it is not possible to
have only one volume copy with an active task.
To check the progress of validation and repair of mirrored volumes, issue the following command:
lsrepairvdiskcopyprogress -delim :
The repairsevdiskcopy command automatically detects and repairs corrupted metadata. The command
holds the volume offline during the repair, but does not prevent the disk from being moved between I/O
groups.
If a repair operation completes successfully and the volume was previously offline because of corrupted
metadata, the command brings the volume back online. The only limit on the number of concurrent
repair operations is the number of virtual disk copies in the configuration.
When you issue the repairsevdiskcopy command, you must specify the name or ID of the volume to be
repaired as the last entry on the command line. Once started, a repair operation cannot be paused or
cancelled; the repair can only be terminated by deleting the copy.
Attention: Use this command only to repair a space-efficient volume (thin-provisioned volume) that has
reported corrupt metadata.
Notes:
1. Because the volume is offline to the host, any I/O that is submitted to the volume while it is being
repaired fails.
Checking the progress of the repair of a space-efficient volume using the CLI
Issue the lsrepairsevdiskcopyprogress command to list the repair progress for space-efficient volume
copies of the specified volume. If you do not specify a volume, the command lists the repair progress for
all space-efficient copies in the system.
Note: Only run this command after you run the repairsevdiskcopy command, which you must only run
as required by the fix procedures or by IBM support.
If you have lost both nodes in an I/O group and have, therefore, lost access to all the volumes that are
associated with the I/O group, you must perform one of the following procedures to regain access to
your volumes. Depending on the failure type, you might have lost data that was cached for these
volumes and the volumes are now offline.
One node in an I/O group has failed and failover has started on the second node. During the failover
process, the second node in the I/O group fails before the data in the write cache is written to hard disk.
The first node is successfully repaired but its hardened data is not the most recent version that is
committed to the data store; therefore, it cannot be used. The second node is repaired or replaced and has
lost its hardened data, therefore, the node has no way of recognizing that it is part of the clustered
system.
Perform the following steps to recover from an offline volume when one node has down-level hardened
data and the other node has lost hardened data:
Procedure
1. Recover the node and add it back into the system.
2. Delete all IBM FlashCopy mappings and Metro Mirror or Global Mirror relationships that use the
offline volumes.
3. Run the recovervdisk, recovervdiskbyiogrp or recovervdiskbysystem command.
4. Re-create all FlashCopy mappings and Metro Mirror or Global Mirror relationships that use the
volumes.
Example
Both nodes in the I/O group have failed and have been repaired. The nodes have lost their hardened
data, therefore, the nodes have no way of recognizing that they are part of the system.
Perform the following steps to recover from an offline volume when both nodes have lost their hardened
data and cannot be recognized by the system:
1. Delete all FlashCopy mappings and Metro Mirror or Global Mirror relationships that use the offline
volumes.
2. Run the recovervdisk, recovervdiskbyiogrp or recovervdiskbysystem command.
3. Create all FlashCopy mappings and Metro Mirror or Global Mirror relationships that use the volumes.
Perform the following steps to recover a node and return it to the clustered system:
Procedure
1. Run the lsnode command (for Storwize V7000 nodes) to verify that the node is offline.
2. Run the rmnode Nodename/ID command to remove the old instance of the offline node from the
clustered system.
3. Run the lsnodecandidate command to verify that the node is visible on the fabric.
4. Run the addnode -wwnodename WWNN -iogrp IOgroupname/ID -name NodeName to add the node back
into the clustered system, where WWNN is the worldwide node name, IOgroupname/ID is the I/O
group name or ID, and NodeName is the name of the node.
Note: In a service situation, a node should normally be added back into a clustered system using the
original node name. If the partner node in the I/O group has not also been deleted, this is the default
name that is used if the -name parameter is not specified.
5. Run the lsnode command to verify that the node is online.
Procedure
1. Issue the following CLI command to list all volumes that are offline and belong to an I/O group:
lsvdisk -filtervalue IO_group_name=IOGRPNAME/ID:status=offline
where IOGRPNAME/ID is the name of the I/O group that failed.
2. To acknowledge data loss for a volume with a fast_write_state of corrupt and bring the volume back
online, enter:
recovervdisk vdisk_id | vdisk_name
where vdisk_id | vdisk_name is the name or ID of the volume.
Notes:
v If the specified volume is space-efficient or has space-efficient copies, the recovervdisk command
starts the space-efficient repair process.
v If the specified volume is mirrored, the recovervdisk command starts the resynchronization process.
3. To acknowledge data loss for all virtual disks in an I/O group with a fast_write_state of corrupt and
bring them back online, enter:
recovervdiskbyiogrp io_group_id | io_group_name
Notes:
v If any volume is space-efficient or has space-efficient copies, the recovervdiskbyiogrp command
starts the space-efficient repair process.
v If any volume is mirrored, the recovervdiskbyiogrp command starts the resynchronization process.
4. To acknowledge data loss for all volumes in the clustered system with a fast_write_state of corrupt and
bring them back online, enter:
recovervdiskbycluster
Notes:
v If any volume is space-efficient or has space-efficient copies, the recovervdiskbycluster command
starts the space-efficient repair process.
v If any volume is mirrored, the recovervdiskbycluster command starts the resynchronization
process.
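The three scopes above differ only in the command used and its target. The following illustrative dispatcher builds the appropriate command string (it does not run anything; on a live system you would issue the resulting command from the SVC CLI):

```shell
# Map a recovery scope to the matching recover* command string.
recover_cmd() {
    scope=$1; target=$2
    case $scope in
        volume) printf 'recovervdisk %s' "$target" ;;
        iogrp)  printf 'recovervdiskbyiogrp %s' "$target" ;;
        system) printf 'recovervdiskbycluster' ;;
        *)      return 1 ;;
    esac
}

# io_grp0 is an illustrative I/O group name:
recover_cmd iogrp io_grp0
```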
Moving offline volumes to their original I/O group using the CLI
You can move offline volumes to their original I/O group using the command-line interface (CLI).
After a node or an I/O group fails, you can use the following procedure to move offline volumes to their
original I/O group.
Attention: Do not move volumes to an offline I/O group. Ensure that the I/O group is online before
you move the volume back to avoid any further data loss.
Perform the following steps to move offline volumes to their original I/O group:
Procedure
1. Issue the following command to move the volume back into the original I/O group:
chvdisk -iogrp IOGRPNAME/ID -force vdiskname/ID
where IOGRPNAME/ID is the name or ID of the original I/O group and vdiskname/ID is the name or
ID of the offline volume.
Important: Using the force parameter might result in a loss of access. Use it only under the direction
of the IBM Support Center.
2. Issue the following command to verify that the volumes are now online:
lsvdisk -filtervalue IO_group_name=IOGRPNAME/ID
where IOGRPNAME/ID is the name or ID of the original I/O group.
Because it is sometimes necessary to replace the host-bus adapter (HBA) that connects the host to the
SAN, you must inform the SAN Volume Controller of the new worldwide port names (WWPNs) that this
HBA contains.
Ensure that your switch is zoned correctly.
Perform the following steps to inform the SAN Volume Controller of a change to a defined host object:
Procedure
1. Issue the following CLI command to list the candidate HBA ports:
lshbaportcandidate
You should see a list of the HBA ports that are available for addition to host objects. One or more of
these HBA ports should correspond with the one or more WWPNs that belong to the new HBA port.
2. Locate the host object that corresponds with the host in which you have replaced the HBA. The
following CLI command lists all the defined host objects:
lshost
3. Issue the following CLI command to list the WWPNs that are currently assigned to the host object:
lshost hostobjectname
where hostobjectname is the name of the host object.
4. Issue the following CLI command to add the new ports to the existing host object:
addhostport -hbawwpn one or more new WWPNs separated by : hostobjectname/ID
where one or more new WWPNs separated by : is the WWPNs that belong to the replacement HBA and
hostobjectname/ID is the name or ID of the host object.
5. Issue the following CLI command to remove the old ports from the host object:
rmhostport -hbawwpn one or more old WWPNs separated by : hostobjectname/ID
where one or more old WWPNs separated by : is the WWPNs of the old HBA that are currently assigned
to the host object and hostobjectname/ID is the name or ID of the host object.
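When only one of several HBAs is replaced, steps 4 and 5 need the set difference between the WWPNs currently assigned to the host object and the desired set after the swap. A sketch, with WWPN values made up for illustration:

```shell
# Current WWPNs on the host object (one per line) and the desired set
# after the swap: one HBA is kept, one is replaced.
current='210000E08B05A123
210000E08B05A124'
desired='210000E08B05A123
210000E08B09B457'

tmp=$(mktemp)
printf '%s\n' "$current" > "$tmp"
to_add=$(printf '%s\n' "$desired" | grep -vxF -f "$tmp")      # addhostport these
printf '%s\n' "$desired" > "$tmp"
to_remove=$(printf '%s\n' "$current" | grep -vxF -f "$tmp")   # rmhostport these
rm -f "$tmp"

echo "add:    $to_add"
echo "remove: $to_remove"
```

The add list feeds addhostport and the remove list feeds rmhostport, so the unchanged HBA's ports are never touched.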
Results
Any mappings that exist between the host object and the virtual disks (VDisks) are automatically applied
to the new WWPNs. Therefore, the host sees the VDisks as the same SCSI LUNs as before.
What to do next
See the IBM System Storage Multipath Subsystem Device Driver User's Guide or the documentation that is
provided with your multipathing driver for additional information about dynamic reconfiguration.
VDisks (volumes) that are mapped for FlashCopy or that are in Metro Mirror relationships cannot be
expanded.
Ensure that you have run Windows Update and have applied all recommended updates to your system
before you attempt to expand a volume that is mapped to a Windows host.
Determine the exact size of the source or master volume by issuing the following CLI command:
lsvdisk -bytes vdiskname
where vdiskname is the name of the volume for which you want to determine the exact size.
A volume that is not mapped to any hosts and does not contain any data can be expanded at any time. If
the volume contains data that is in use, you can expand the volumes if your host has a supported AIX or
Microsoft Windows operating system.
The following table provides the supported operating systems and requirements for expanding volumes
that contain data:
The chvg command options provide the ability to expand the size of a physical volume that the Logical
Volume Manager (LVM) uses, without interruptions to the use or availability of the system. See the AIX
System Management Guide Operating System and Devices for more information.
Perform the following steps to expand a volume that is mapped to a Windows host:
Procedure
1. Issue the following CLI command to expand the volume:
expandvdisksize -size disk_size -unit b | kb | mb | gb | tb | pb vdisk_name/vdisk_id
where disk_size is the capacity by which you want to expand the volume, b | kb | mb | gb | tb | pb is
the data unit to use in conjunction with the capacity and vdisk_name/vdisk_id is the name of the
volume or the ID of the volume to expand.
2. On the Windows host, start the Computer Management application and open the Disk Management
window under the Storage branch.
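Note that -size in step 1 is the amount to add, not the final size. A sketch of computing the delta from the exact byte count reported by lsvdisk -bytes (the byte values here are illustrative):

```shell
# Current size as reported by lsvdisk -bytes, and the desired final size.
current_bytes=10737418240      # 10 GB (illustrative)
desired_bytes=16106127360      # 15 GB target (illustrative)

# expandvdisksize takes the capacity to add, so compute the difference.
expand_by=$((desired_bytes - current_bytes))
echo "expandvdisksize -size $expand_by -unit b vdisk8"
```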
Results
You will see the volume that you expanded now has some unallocated space at the end of the disk.
You can expand dynamic disks without stopping I/O operations in most cases. However, in some
applications the operating system might report I/O errors. When this problem occurs, either of the
following entries might be recorded in the System event log:
Event Type: Information
Event Source: dmio
Event Category: None
Event ID: 31
Description: dmio:
Harddisk0 write error at block ######## due to
disk removal
Attention: This is a known problem with Windows 2000 and is documented in the Microsoft knowledge
base as article Q327020. If either of these errors is seen, run Windows Update and apply the
recommended fixes to resolve the problem.
What to do next
If the Computer Management application was open before you expanded the volume, use the Computer
Management application to issue a rescan command.
If the disk is a Windows basic disk, you can create a new primary or extended partition from the
unallocated space.
If the disk is a Windows dynamic disk, you can use the unallocated space to create a new volume
(simple, striped, mirrored) or add it to an existing volume.
Volumes can be reduced in size, if necessary. You can make a target or auxiliary volume the same
size as the source or master volume when you create FlashCopy mappings, Metro Mirror relationships, or
Global Mirror relationships. However, if the volume contains data, do not shrink the size of the disk.
You can use the shrinkvdisksize command to shrink the physical capacity that is allocated to the
particular volume by the specified amount. You can also shrink the virtual capacity of a thin-provisioned
volume without altering the physical capacity assigned to the volume.
For more information about the command parameters, see the IBM System Storage SAN Volume Controller
and IBM Storwize V7000 Command-Line Interface User's Guide.
Procedure
The SAN Volume Controller provides various data migration features. These can be used to move the
placement of data both within storage pools and between storage pools. These features can be used
concurrently with I/O operations. You can use either of these methods to migrate data:
1. Migrating data (extents) from one MDisk to another (within the same storage pool). This can be used
to remove highly utilized MDisks.
2. Migrating volumes from one storage pool to another. This can be used to remove highly utilized
storage pools. For example, you can reduce the utilization of a group of MDisks.
Migration commands fail if the target or source volume is offline, or if there is insufficient quorum disk
space to store the metadata. Correct the offline or quorum disk condition and reissue the command.
You can determine the usage of particular MDisks by gathering input/output (I/O) statistics about nodes,
MDisks, and volumes. After you have gathered this data, you can analyze it to determine which MDisks
are highly utilized. The procedure then takes you through querying and migrating extents to elsewhere in
the same storage pool. This procedure can only be performed using the command-line tools.
If performance monitoring tools, such as IBM Tivoli Storage Productivity Center, indicate that a managed
disk in the pool is being overutilized, you can migrate some of the data onto other MDisks within the
same storage pool.
Procedure
1. Determine the number of extents that are in use by each volume for the given MDisk by issuing this
CLI command:
lsmdiskextent mdiskname
This command returns the number of extents that each volume is using on the given MDisk. You
should pick some of these to migrate elsewhere in the group.
2. Determine the other MDisks that reside in the same storage pool.
a. To determine the storage pool that the MDisk belongs to, issue this CLI command:
lsmdisk mdiskname | ID
b. List the MDisks in the group by issuing this CLI command:
lsmdisk -filtervalue mdisk_grp_name=mdiskgrpname
3. Select one of these MDisks as the target MDisk for the extents. You can issue the lsmdiskextent
newmdiskname command for each of the target MDisks to ensure that you are not just moving the
over-utilization to another MDisk. Check that the volume that owns the set of extents to be moved
does not already own a large set of extents on the target MDisk.
4. For each set of extents, issue this CLI command to move them to another MDisk:
migrateexts -source mdiskname | ID -exts num_extents -target newmdiskname | ID -threads number_of_threads -vdisk vdiskid
where num_extents is the number of extents on the vdiskid. The newmdiskname | ID value is the name
or ID of the MDisk to migrate this set of extents to.
Note: The number of threads indicates the priority of the migration processing, where 1 is the lowest
priority and 4 is the highest priority.
5. Repeat the previous steps for each set of extents that you are moving.
6. You can check the progress of the migration by issuing this CLI command:
lsmigrate
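A quick way to pick a first migration candidate from step 1's output is to find the volume holding the most extents on the overutilized MDisk. A sketch (the extent counts below are made up for illustration; lsmdiskextent output is simplified to id, number_of_extents, copy_id):

```shell
# Sample lsmdiskextent output for an overutilized MDisk (illustrative).
extents='id number_of_extents copy_id
3 120 0
7 40 0
9 250 0'

# Skip the header, sort by extent count descending, take the top volume id.
busiest=$(printf '%s\n' "$extents" | tail -n +2 | sort -k2,2nr | head -n 1 | awk '{print $1}')
echo "candidate volume id: $busiest"
```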
You can determine the usage of particular MDisks by gathering input/output (I/O) statistics about nodes,
MDisks, and volumes. After you have gathered this data, you can analyze it to determine which volumes
or MDisks are hot. You can then migrate volumes from one storage pool to another.
Perform the following step to gather statistics about MDisks and volumes:
1. Use secure copy (scp command) to retrieve the dump files for analyzing. For example, issue the
following:
scp clusterip:/dumps/iostats/v_* .
This copies all the volume statistics files to the AIX host in the current directory.
After you analyze the I/O statistics data, you can determine which volumes are hot. You also need to
determine the storage pool that you want to move this volume to. Either create a new storage pool or
determine an existing group that is not yet overly used. To do this, check the I/O statistics files that you
generated and then ensure that the MDisks or VDisks in the target storage pool are used less than those
in the source group.
You can use data migration or volume mirroring to migrate data between MDisk groups. Data migration
uses the command migratevdisk. Volume mirroring uses the commands addvdiskcopy and rmvdiskcopy.
When you issue the migratevdisk command, a check is made to ensure that the destination of the
migration has enough free extents to satisfy the command. If it does, the command proceeds. The
command takes some time to complete.
Notes:
v You cannot use the SAN Volume Controller data migration function to move a volume between storage
pools that have different extent sizes.
v Migration commands fail if the target or source volume is offline, or if there is insufficient quorum disk
space to store the metadata. Correct the offline or quorum disk condition and reissue the command.
When you use data migration, it is possible for the free destination extents to be consumed by another
process; for example, if a new volume is created in the destination storage pool or if more migration
commands are started. In this scenario, after all the destination extents are allocated, the migration
commands suspend and an error is logged (error ID 020005). To recover from this situation, use either of
the following methods:
v Add additional MDisks to the target storage pool. This provides additional extents in the group and
allows the migrations to be restarted. You must mark the error as fixed before you reattempt the
migration.
v Migrate one or more VDisks that are already created from the storage pool to another group. This frees
up extents in the group and allows the original migrations to be restarted.
Perform the following steps to use the migratevdisk command to migrate volumes between storage
pools:
Procedure
1. After you determine the volume that you want to migrate and the new storage pool you want to
migrate it to, issue the following CLI command:
migratevdisk -vdisk vdiskname/ID -mdiskgrp newmdiskgrname/ID -threads 4
2. You can check the progress of the migration by issuing the following CLI command:
lsmigrate
What to do next
When you use data migration, the volume goes offline if either storage pool fails. Volume mirroring can
be used to minimize the impact to the volume because the volume goes offline only if the source storage
pool fails.
Perform the following steps to use volume mirroring to migrate volumes between storage pool:
1. After you determine the volume that you want to migrate and the new storage pool that you want to
migrate it to, issue the following command:
addvdiskcopy -mdiskgrp newmdiskgrname/ID vdiskname/ID
2. The copy ID of the new copy is returned. The copies now synchronize such that the data is stored in
both storage pools. You can check the progress of the synchronization by issuing the following
command:
lsvdisksyncprogress
3. After the synchronization is complete, remove the copy from the original I/O group to free up extents
and decrease the utilization of the storage pool. To remove the original copy, issue the following
command:
rmvdiskcopy -copy original_copy_id vdiskname/ID
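The three mirroring steps can be strung together as in this illustrative helper. The names (vdisk8, Pool2) and the original copy ID (0) are hypothetical; in practice you would wait for lsvdisksyncprogress to report synchronization as complete before removing the original copy.

```shell
# Sketch: the mirroring-based migration sequence, printed rather than executed.
migrate_by_mirroring() {
  vdisk=$1; pool=$2; orig_copy=$3
  echo "addvdiskcopy -mdiskgrp $pool $vdisk"   # 1. add a copy in the new pool
  echo "lsvdisksyncprogress"                   # 2. poll until synchronized
  echo "rmvdiskcopy -copy $orig_copy $vdisk"   # 3. remove the original copy
}

migrate_by_mirroring vdisk8 Pool2 0
```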
Attention: These migration tasks can be nondisruptive if they are performed correctly and the hosts that
are mapped to the volume support nondisruptive volume move. The cached data that is held within the
clustered system (system) must first be written to disk before the allocation of the volume can be changed.
Modifying the I/O group that services the volume can be done concurrently with I/O operations if the
host supports nondisruptive volume move. It also requires a rescan at the host level to ensure that the
multipathing driver is notified that the allocation of the preferred node has changed and that the ports by
which the volume is accessed have changed. This can be done when one pair of nodes has become
overutilized.
If there are any host mappings for the volume, the hosts must be members of the target I/O group or the
migration fails.
Make sure you create paths to I/O groups on the host system. After the system has successfully added
the new I/O group to the volume's access set and you have moved selected volumes to another I/O
group, detect the new paths to the volumes on the host. The commands and actions on the host vary
depending on the type of host and the connection method used. These steps must be completed on all
hosts to which the selected volumes are currently mapped.
Procedure
1. Issue the following command: addvdiskaccess -iogrp iogrp_id/name volume_id/name
2. Issue the following command: movevdisk -iogrp destination_iogrp -node new_preferred_node
volume_id/name
3. Issue the appropriate commands on the hosts mapped to the volume to detect the new paths to the
volume in the destination I/O group.
4. After you confirm that the new paths are online, remove access from the old I/O group: rmvdiskaccess
-iogrp iogrp_id/name volume_id/name
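The four steps above can be summarized in one illustrative sequence. The object names (vdisk8, io_grp0, io_grp1, node3) are hypothetical, and the host-side rescan step varies by host type, so it appears here only as a placeholder comment.

```shell
# Sketch: nondisruptive move of a volume between I/O groups, printed in order.
move_iogroup() {
  vol=$1; new_iogrp=$2; new_node=$3; old_iogrp=$4
  echo "addvdiskaccess -iogrp $new_iogrp $vol"             # extend the access set
  echo "movevdisk -iogrp $new_iogrp -node $new_node $vol"  # move the volume
  echo "# rescan paths on every host mapped to the volume" # host-specific step
  echo "rmvdiskaccess -iogrp $old_iogrp $vol"              # retire the old paths
}

move_iogroup vdisk8 io_grp1 node3 io_grp0
```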
Make sure you are aware of the following before you create image mode volumes:
1. Unmanaged-mode managed disks (MDisks) that contain existing data cannot be differentiated from
unmanaged-mode MDisks that are blank. Therefore, it is vital that you control the introduction of
these MDisks to the clustered system by adding these disks one at a time. For example, map a single
LUN from your RAID storage system to the clustered system and refresh the view of MDisks. The
newly detected MDisk is displayed.
2. Do not manually add an unmanaged-mode MDisk that contains existing data to a storage pool. If you
do, the data is lost. When you use the command to convert an image mode volume from an
unmanaged-mode disk, you will select the storage pool where it should be added.
See the following website for more information:
www.ibm.com/storage/support/2145
Procedure
1. Stop all I/O operations from the hosts. Unmap the logical disks that contain the data from the hosts.
2. Create one or more storage pools.
3. Map a single array or logical unit from your RAID storage system to the clustered system. You can do
this through switch zoning or RAID storage system-based host mappings. The array or logical unit
appears as an unmanaged-mode MDisk to the SAN Volume Controller.
4. Issue the lsmdisk command to list the unmanaged-mode MDisks.
If the new unmanaged-mode MDisk is not listed, you can perform a fabric-level discovery. Issue the
detectmdisk command to scan the Fibre Channel network for the unmanaged-mode MDisks.
Note: The detectmdisk command also rebalances MDisk access across the available storage system
device ports.
5. Convert the unmanaged-mode MDisk to an image mode virtual disk.
Note: If the volume that you are converting maps to a solid-state drive (SSD), the data that is stored
on the volume is not protected against SSD failures or node failures. To avoid data loss, add a volume
copy that maps to an SSD on another node.
Issue the mkvdisk command to create an image mode virtual disk object.
6. Map the new volume to the hosts that were previously using the data that the MDisk now contains.
You can use the mkvdiskhostmap command to create a new mapping between a volume and a host.
This makes the image mode volume accessible for I/O operations to the host.
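Steps 5 and 6 can be sketched as follows. The pool, I/O group, MDisk, volume, and host names are all hypothetical, and the helper prints the commands that would be issued rather than running them.

```shell
# Sketch: create an image mode volume from an unmanaged MDisk, then map it.
make_image_vdisk() {
  pool=$1; iogrp=$2; mdisk=$3; name=$4; host=$5
  # -vtype image preserves the existing data on the MDisk
  echo "mkvdisk -mdiskgrp $pool -iogrp $iogrp -vtype image -mdisk $mdisk -name $name"
  echo "mkvdiskhostmap -host $host $name"   # restore host access to the data
}

make_image_vdisk Pool1 io_grp0 mdisk12 legacyvol host1
```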
Results
After the volume is mapped to a host object, the volume is detected as a disk drive with which the host
can perform I/O operations.
What to do next
If you want to virtualize the storage on an image mode volume, you can transform it into a striped
volume. Migrate the data on the image mode volume to managed-mode disks in another storage pool.
Issue the migratevdisk command to migrate an entire image mode volume from one storage pool to
another storage pool.
The migratetoimage CLI command allows you to migrate the data from an existing VDisk (volume) onto
a different managed disk (MDisk).
When the migratetoimage CLI command is issued, it migrates the data of the user specified source VDisk
onto the specified target MDisk. When the command completes, the VDisk is classified as an image mode
VDisk.
Note: Migration commands fail if the target or source VDisk is offline, or if there is insufficient quorum
disk space to store the metadata. Correct the offline or quorum disk condition and reissue the command.
The MDisk specified as the target must be in an unmanaged state at the time the command is run.
Issuing this command results in the inclusion of the MDisk into the user specified MDisk group.
Issue the following CLI command to migrate data to an image mode VDisk:
migratetoimage -vdisk vdiskname/ID -mdisk newmdiskname/ID -mdiskgrp newmdiskgrpname/ID
where vdiskname/ID is the name or ID of the VDisk, newmdiskname/ID is the name or ID of the new
MDisk, and newmdiskgrpname/ID is the name or ID of the new MDisk group (storage pool).
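As an illustration, the migratetoimage invocation can be composed like this; the volume, MDisk, and pool names are hypothetical, and the target MDisk must be unmanaged when the real command runs.

```shell
# Sketch: compose a migratetoimage command for hypothetical objects.
to_image() {
  vdisk=$1; mdisk=$2; pool=$3
  echo "migratetoimage -vdisk $vdisk -mdisk $mdisk -mdiskgrp $pool"
}

to_image vdisk8 mdisk12 Pool2
```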
After the node is deleted, the other node in the I/O group enters write-through mode until another node
is added back into the I/O group.
By default, the rmnode command flushes the cache on the specified node before taking the node offline.
When operating in a degraded state, the SAN Volume Controller ensures that data loss does not occur as
a result of deleting the only node with the cache data.
Procedure
1. If you are deleting the last node in an I/O group, determine the volumes that are still assigned to this
I/O group:
a. Issue this CLI command to request a filtered view of the volumes:
lsvdisk -filtervalue IO_group_name=name
Note: If volumes are assigned to this I/O group that contain data that you want to continue to access,
back up the data or migrate the volumes to a different (online) I/O group.
2. If this node is not the last node in the clustered system, turn off the power to the node that you
intend to remove. This step ensures that the multipathing device driver, such as the subsystem device
driver (SDD), does not rediscover the paths that are manually removed before you issue the delete
node request.
Attention:
a. If you are removing the configuration node, the rmnode command causes the configuration node to
move to a different node within the clustered system. This process might take a short time,
typically less than a minute. The clustered system IP address remains unchanged, but any SSH
client attached to the configuration node might need to reestablish a connection.
b. If you turn on the power to the node that has been removed and it is still connected to the same
fabric or zone, it attempts to rejoin the clustered system. The clustered system causes the node to
remove itself from the clustered system and the node becomes a candidate for addition to this
clustered system or another clustered system.
c. If you are adding this node into the clustered system, ensure that you add it to the same I/O
group that it was previously a member of. Failure to do so can result in data corruption.
d. In a service situation, a node should normally be added back into a clustered system by using the
original node name. As long as the partner node in the I/O group has not also been deleted, this is
the default name that is used if -name is not specified.
3. Before you delete the node, update the multipathing device driver configuration on the host to
remove all device identifiers that are presented by the volumes that you intend to remove. If you are
using the subsystem device driver, the device identifiers are referred to as virtual paths (vpaths).
Attention: Failure to perform this step can result in data corruption.
See the IBM System Storage Multipath Subsystem Device Driver User's Guide for details about how to
dynamically reconfigure SDD for the given host operating system.
4. Issue this CLI command to delete a node from the clustered system:
Attention: Before you delete the node: The rmnode command checks for node-dependent volumes,
which are not mirrored at the time that the command is run. If any node-dependent volumes are
found, the command stops and returns a message. To continue removing the node despite the
potential loss of data, run the rmnode command with the force parameter. Alternatively, follow these
steps before you remove the node to ensure that all volumes are mirrored:
a. Run the lsdependentvdisks command.
b. For each node-dependent volume that is returned, run the lsvdisk command.
c. Ensure that each volume returns in-sync status.
rmnode node_name_or_identification
Where node_name_or_identification is the name or identification of the node.
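The dependency check and the deletion can be sketched together. The node name is hypothetical; the helper prints the commands in the safe order (check for node-dependent volumes first, then remove the node) rather than running them.

```shell
# Sketch: safe node-removal sequence, printed rather than executed.
remove_node() {
  node=$1
  echo "lsdependentvdisks -node $node"  # list volumes that depend on this node
  echo "rmnode $node"                   # delete the node only after the check
}

remove_node node3
```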
Procedure
1. Issue the finderr command to analyze the error log for the highest severity of unfixed errors. This
command scans the error log for any unfixed errors. Given a priority ordering defined within the
code, the highest priority of unfixed errors is returned.
2. Issue the dumperrlog command to dump the contents of the error log to a text file.
3. Locate and fix the error.
4. Issue the clearerrlog command to clear all entries from the error log, including status events and any
unfixed errors. Only issue this command when you have either rebuilt the clustered system or have
fixed a major problem that has caused many entries in the error log that you do not want to fix
individually.
Attention: When you specify a new IP address for a system, the existing communication with the
system is broken. You must reconnect to the system with the new IP address.
Procedure
1. Issue the lssystemip command to list the current IP addresses that are used by the system.
2. Record the current IP addresses for future reference.
3. To change an Internet Protocol Version 4 (IPv4) system IP address, issue this command:
chsystemip -clusterip cluster_ip_address -port cluster_port
where cluster_ip_address is the new IP address for the cluster and cluster_port specifies which port (1
or 2) to apply changes to.
4. To change an IPv4 system IP address to an IPv6 system IP address, issue this command:
chsystemip -clusterip_6 cluster_ip_address -port cluster_port
where cluster_ip_address is the new Internet Protocol Version 6 (IPv6) address for the cluster and
cluster_port specifies which port (1 or 2) to apply changes to.
5. To change an IPv4 default gateway IP address, issue this command:
chsystemip -gw cluster_gateway_address -port cluster_port
where cluster_gateway_address is the new gateway address for the cluster and cluster_port specifies
which port (1 or 2) to apply changes to.
6. To change an IPv6 default gateway address, issue this command:
chsystemip -gw_6 cluster_gateway_address -port cluster_port
where cluster_gateway_address is the new gateway address for the cluster and cluster_port specifies
which port (1 or 2) to apply changes to.
7. Issue this command to change an IPv4 system subnet mask
chsystemip -mask cluster_subnet_mask -port cluster_port
where cluster_subnet_mask is the new subnet mask for the cluster and cluster_port specifies which port
(1 or 2) to apply changes to.
8. For IPv6 addresses, you can issue this command to set the prefix for the system:
chsystemip -prefix_6 cluster_prefix -port cluster_port
where cluster_prefix is the new IPv6 address prefix for the system and cluster_port specifies which port
(1 or 2) to apply changes to.
9. Optionally, if you want to delete all of the IPv4 addresses in the system after you have changed all
addresses to IPv6, issue this command:
chsystem -noip
10. Optionally, if you want to delete all of the IPv6 addresses in the system after you have changed all
addresses to IPv4, issue this command:
chsystem -noip_6
11. The IP routing table provides details of the gateway that is used for IP traffic to a range of IP
addresses for each Ethernet port. This information can be used to diagnose configuration node
accessibility problems. To display the IP routing table, enter this CLI command:
lsroute
12. The ping command can be used to diagnose IP configuration problems by checking whether a given
IP address is accessible from the configuration node. The command can be useful for diagnosing
problems where the configuration node cannot be reached from a specific management server. For
example, enter this CLI command:
ping ipv4_address | ipv6_address
where ipv4_address | ipv6_address is either the IPv4 address or the IPv6 address.
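The chsystemip steps share a common shape, which this illustrative helper captures for the IPv4 case. The address is hypothetical; the helper validates the port number (1 or 2) and prints the command that would be issued.

```shell
# Sketch: compose a chsystemip command, validating the port value first.
change_system_ip() {
  ip=$1; port=$2
  case "$port" in
    1|2) ;;
    *) echo "error: port must be 1 or 2" >&2; return 1 ;;
  esac
  echo "chsystemip -clusterip $ip -port $port"
}

change_system_ip 192.168.1.50 1
```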
Procedure
1. Issue the lssystemip command to list the current gateway address of the system.
2. Record the current gateway address for future reference.
3. Issue the following command to change an IPv4 clustered system gateway address:
chsystemip -gw cluster_gateway_address -port cluster_port
where cluster_gateway_address is the new gateway address for the system. The port parameter specifies
which port (1 or 2) to apply changes to.
4. Issue the following command to change an IPv6 system gateway address:
chsystemip -gw_6 cluster_gateway_address -port cluster_port
where cluster_gateway_address is the new gateway address for the system. The port parameter specifies
which port (1 or 2) to apply changes to.
The relationship bandwidth limit controls the maximum rate at which any one remote-copy relationship
can synchronize. The overall limit is controlled by the bandwidth parameter of each clustered system
partnership. The default value for the relationship bandwidth limit is 25 megabytes per second (MBps),
but you can change this by following these steps:
Procedure
1. Issue the lssystem command to list the current relationship bandwidth limit of the system. For
example:
lssystem system_id_or_system_name
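The change itself can be sketched as follows. This assumes the -relationshipbandwidthlimit parameter of the chsystem command (as documented in the product's CLI reference); the 30 MBps value is hypothetical, and the helper only composes and prints the command after checking that the limit is numeric.

```shell
# Sketch: compose a chsystem command to set the relationship bandwidth limit.
# Assumption: chsystem accepts -relationshipbandwidthlimit in MBps.
set_relationship_limit() {
  limit=$1
  case "$limit" in
    ''|*[!0-9]*) echo "error: limit must be a number of MBps" >&2; return 1 ;;
  esac
  echo "chsystem -relationshipbandwidthlimit $limit"
}

set_relationship_limit 30
```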
Before completing any iSCSI-configuration tasks on the system, it is important that you complete all the
iSCSI-related configuration on the host machine. Because the SAN Volume Controller supports a variety
of host machines, consult the documentation for specific instructions and requirements for a particular
host. For a list of supported hosts, see this website:
www.ibm.com/storage/support/2145
To configure a system for iSCSI, follow these general tasks on the host system:
1. Select a software-based iSCSI initiator, such as Microsoft Windows iSCSI Software Initiator and verify
the iSCSI driver installation.
2. If required, install and configure a multipathing driver for the host system.
In addition, determine a naming convention for iSCSI names, such as iSCSI qualified names (IQNs), for
your system. Hosts use iSCSI names to connect to the node. Each node, for example, has a unique IQN,
and the system name and node name are used as part of that IQN.
Port IP addresses are the IP addresses that are used by iSCSI-attached hosts to perform I/O.
Procedure
1. To configure a new port IP address to a specified Ethernet port of a node with an IPv4 address, enter
the following command-line interface (CLI) command:
cfgportip -node node_name | node_id -ip ipv4addr
-gw ipv4gw -mask subnet_mask -failover port_id
where node_name | node_id specifies the name or ID of the node that is being configured, ipv4addr is
the IPv4 address for the Ethernet port, ipv4gw is the IPv4 gateway IP address, subnet_mask is the IPv4
subnet mask, and port_id specifies the Ethernet port ID (1 or 2). To view a list of ports, use the
lsportip command.
The optional -failover parameter specifies that the port is to be used during failover. If the node that
is specified is the only online node in the I/O group, the address is configured and presented by this
node. When another node in the I/O group comes online, the failover address is presented by that
node. If two nodes in the I/O group are online when the command is issued, the address is presented
by the node other than the one that is specified.
2. To configure a new port IP address that belongs to a partner node with an IPv6 address in the I/O
group, enter the following CLI command:
cfgportip -node node_name | node_id -ip_6 ipv6addr
-gw_6 ipv6gw -prefix_6 prefix -failover port_id
where node_name | node_id specifies the name or ID of the node that is being configured, ipv6addr is
the IPv6 address for the Ethernet port, ipv6gw is the IPv6 gateway IP address, prefix is the IPv6
address prefix, and port_id specifies the Ethernet port ID (1 or 2). To view a list of ports, use the
lsportip command.
The optional -failover parameter specifies that the data is failover data, which is data that is related
to the partner node. If the node that is specified is the only online node in the I/O group, the address
is configured and presented by this node. When another node in the I/O group comes online, the
failover address is presented by that node. If two nodes in the I/O group are online when the
command is issued, the address is presented by the node other than the one that is specified.
3. To remove an iSCSI IP address from a node Ethernet port, enter either of these CLI commands. The
following command deletes an IPv4 configuration for the specified iSCSI Ethernet port:
rmportip -failover
-node node_name | node_id port_id
where node_name | node_id specifies the name or ID of the node with the Ethernet port that the IP
address is being removed from and port_id specifies the Ethernet port ID. To list the valid values for
the Ethernet port, enter the lsportip command. The optional -failover parameter indicates that the
specified data is failover data.
The following command deletes an IPv6 configuration for the specified iSCSI Ethernet port:
rmportip -ip_6 -failover
-node node_name | node_id port_id
where -ip_6 indicates that this command will remove an IPv6 configuration, node_name | node_id
specifies the name or ID of the node with the Ethernet port that the IP address is being removed
from, and port_id specifies the Ethernet port ID. To list the valid values for the Ethernet port, enter the
lsportip command. The optional -failover parameter indicates that the specified data is failover
data.
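Step 1 can be sketched as a helper that composes the cfgportip command. All of the values here (node name, addresses, mask, port) are hypothetical; the helper prints the command that would be issued.

```shell
# Sketch: compose an IPv4 cfgportip command for a hypothetical node and port.
set_port_ip() {
  node=$1; ip=$2; gw=$3; mask=$4; port=$5
  # port_id is the trailing positional argument of cfgportip
  echo "cfgportip -node $node -ip $ip -gw $gw -mask $mask $port"
}

set_port_ip node1 192.168.20.10 192.168.20.1 255.255.255.0 1
```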
What to do next
After you configure your IP addresses, you can optionally create iSCSI aliases.
Procedure
1. To set an iSCSI alias for a specified node, enter the following CLI
command:
chnode -iscsialias alias node_name | node_id
where alias specifies the iSCSI alias of the node and node_name | node_id specifies the name or ID
of the node.
2. To specify that the name or iSCSI alias that is being set is the name or alias of the partner node in the
I/O group, enter the following CLI command. When there is no partner node, the values set are
applied to the partner node when it is added to the clustered system. If this parameter is used when
there is a partner node, the name or alias of that node changes.
chnode -iscsialias alias -failover node_name | node_id
where alias specifies the iSCSI name of the node and node_name | node_id specifies the node to be
modified.
After you create iSCSI aliases, you can optionally configure the address for the Internet Storage Name
Service (iSNS) server for the system.
Procedure
1. To specify an IPv4 address for the iSCSI storage name service (SNS), enter the following CLI
command:
chsystem -isnsip sns_server_address
where sns_server_address is the IP address of the iSCSI storage name service in IPv4 format.
2. To specify an IPv6 address for the iSCSI storage name service (SNS), enter the following CLI
command:
chsystem -isnsip_6 ipv6_sns_server_address
where ipv6_sns_server_address is the IP address of the iSCSI storage name service in IPv6 format.
What to do next
To configure authentication between the SAN Volume Controller clustered system and the iSCSI-attached
hosts, follow these steps:
Procedure
1. To set the authentication method for the iSCSI communications of the clustered system, enter the
following CLI command:
chsystem -iscsiauthmethod chap -chapsecret chap_secret
where chap sets the authentication method for the iSCSI communications of the clustered system and
chap_secret sets the CHAP secret to be used to authenticate the clustered system via iSCSI. This
parameter is required if the iscsiauthmethod chap parameter is specified. The specified CHAP secret
cannot begin or end with a space.
2. To clear any previously set CHAP secret for iSCSI authentication, enter the following CLI command:
chsystem -nochapsecret
3. The lsiscsiauth command lists the Challenge Handshake Authentication Protocol (CHAP) secret that
is configured for authenticating an entity to the SAN Volume Controller clustered system. The
command also displays the configured iSCSI authentication method. For example, enter the following
CLI command:
lsiscsiauth
What to do next
After you configure the CHAP secret for the SAN Volume Controller clustered system, ensure that the
clustered system CHAP secret is added to each iSCSI-attached host. On all iSCSI-attached hosts, specify a
CHAP secret that the hosts use to authenticate to the SAN Volume Controller clustered system.
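The CHAP steps can be sketched in one illustrative helper. The secret value is hypothetical; the helper enforces the documented rule that a CHAP secret cannot begin or end with a space, and prints the commands that would be issued.

```shell
# Sketch: set the CHAP authentication method, then list the result.
chap_setup() {
  secret=$1
  # the CHAP secret must not begin or end with a space
  case "$secret" in
    " "*|*" ") echo "error: CHAP secret cannot begin or end with a space" >&2; return 1 ;;
  esac
  echo "chsystem -iscsiauthmethod chap -chapsecret $secret"
  echo "lsiscsiauth"   # verify the configured method and secret
}

chap_setup mysecret123
```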
If a user is configured on the clustered system as a local user, only local credentials are used. Otherwise,
when using the management GUI or the command-line interface (CLI), users entering their password are
authenticated against the remote service, and their roles are determined according to group memberships
defined on the remote service. If a user is configured on the clustered system as a remote user with an
SSH key, the user can additionally access the command-line interface using this Secure Shell (SSH) key.
Group memberships continue to be determined from the remote service.
To use the SAN Volume Controller with TIP, follow these steps:
Procedure
1. Configure the system with the location of the remote authentication server. Issue the chauthservice
command to change system settings, and issue the lssystem command to view system settings.
Remember: You can use either an http or https connection to the server. If you use http, the user,
password, and SSH key information is transmitted as clear text over the IP network.
2. Configure user groups (with roles) on the system by matching those that are used by the
authentication service. For each group of interest known to the authentication service, a SAN Volume
Controller user group must be created with the same name and with the remote setting enabled. For
example, if members of a group called sysadmins require the SAN Volume Controller Administrator
(Administrator) role, issue the following command:
mkusergrp -name sysadmins -remote -role Administrator
If none of the groups for a user match any of the SAN Volume Controller user groups, the user
cannot access the system.
3. Configure users who do not require Secure Shell (SSH) access. SAN Volume Controller users who use
the remote authentication service and do not require SSH access should be deleted from the system.
Remember: A superuser cannot be deleted and cannot use the remote authentication service.
Important: Use the same Network Time Protocol (NTP) server for both systems.
Tip: A superuser cannot be authenticated by using a remote Lightweight Directory Access Protocol
(LDAP) server. However, other users can authenticate in this manner.
Procedure
3. Configure user groups (with roles) on the system by matching those that are used by the
authentication service. For each group of interest known to the authentication service, a SAN Volume
Controller user group must be created with the same name and with the remote setting enabled. For
example, if members of a group called sysadmins require the SAN Volume Controller Administrator
(Administrator) role, issue the following command:
mkusergrp -name sysadmins -remote -role Administrator
If none of the user groups match a SAN Volume Controller user group, the user cannot access the
system.
4. Verify your LDAP configuration using the testldapserver command.
To test the connection to the LDAP servers, issue the command without any options. A username can
be supplied with or without a password to test for configuration errors. To perform a full
authentication attempt against each server, issue the following commands:
testldapserver -username username -password password
5. Issue the following command to enable LDAP authentication:
chauthservice -type ldap -enable yes
6. Configure users who do not require Secure Shell (SSH) key access.
SAN Volume Controller users who must use the remote authentication service and do not require SSH
key access should be deleted from the system.
Roles apply to both local and remote users on the system and are based on the user group to which the
user belongs. A local user can only belong to a single group; therefore, the role of a local user is defined
by the single group that the user belongs to. Remote users can belong to one or more groups; therefore,
the roles of remote users are assigned according to the groups that the remote user belongs to.
Procedure
1. Issue the mkusergrp CLI command to create a new user group. For example:
mkusergrp -name group_name -role role_name -remote
where group_name specifies the name of the user group and role_name specifies the role that is
associated with any users that belong to this group. The remote parameter specifies that the group is
visible to the remote authentication service.
The command returns the ID of the user group that was created. To create user groups in the
management GUI, select Access > Users. From the Global Actions menu, select New User Group.
2. Issue the chusergrp CLI command to change attributes of an existing user group. For example:
chusergrp -role role_name -remote yes | no group_id_or_name
where role_name specifies the role that is associated with any users that belong to this group and
group_id_or_name specifies the group to be changed. The remote parameter specifies whether the
group is visible to the authentication server.
3. Issue the rmusergrp CLI command to delete a user group. For example:
rmusergrp -force group_id_or_name
where group_id_or_name specifies the group to delete. The force parameter specifies to delete the
group even if there are users in the user group. All users that were assigned to this group are
assigned to the Monitor group.
Important: Using the force parameter might result in a loss of access. Use it only under the direction
of the IBM Support Center.
To delete a user group in the management GUI, select Access > Users. Select a user group and select
Delete from the Actions menu.
4. Issue the lsusergrp CLI command to display the user groups that have been created on the system.
For example:
lsusergrp usergrp_id_or_name
where usergrp_id_or_name specifies the user group to view. If you do not specify a user group ID or
name, all user groups on the system are displayed.
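As an illustration of step 1, this helper composes the mkusergrp command and checks the role name against the set of roles this product version defines (Monitor, CopyOperator, Service, Administrator, SecurityAdmin). The group name sysadmins is hypothetical.

```shell
# Sketch: compose a mkusergrp command for a remote-enabled group.
make_remote_group() {
  name=$1; role=$2
  case "$role" in
    Monitor|CopyOperator|Service|Administrator|SecurityAdmin) ;;
    *) echo "error: unknown role $role" >&2; return 1 ;;
  esac
  echo "mkusergrp -name $name -remote -role $role"
}

make_remote_group sysadmins Administrator
```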
Local users must provide either a password, a Secure Shell (SSH) key, or both. Local users are
authenticated through the authentication methods that are located on the Storwize V7000 or SAN Volume
Controller system.
You can create two categories of users that access the clustered system (system). These user types are
based on how they authenticate to the system:
v Local users must provide a password, an SSH key, or both.
v If the local user needs access to the management GUI, a password is needed for the user.
v If the user requires access to the command-line interface (CLI), a valid SSH key file is necessary. If a
user works with both interfaces, both a password and an SSH key must be used.
v Local users must be part of a user group that is defined on the system.
Remote users should also configure local credentials if they need to access the system when the remote
service is down. Remote users have their groups defined by the remote authentication service.
v For information about remote users with Tivoli Integrated Portal (TIP) support, see Configuring
remote authentication service with Tivoli Integrated Portal (TIP) using the CLI on page 71.
v For information about users with Lightweight Directory Access Protocol (LDAP), see Configuring
remote authentication service with Lightweight Directory Access Protocol (LDAP) using the CLI on
page 72.
v For more information about local and remote users, see Working with local and remote users on
page 8.
Procedure
1. Issue the mkuser CLI command to create either a local or remote user to access Storwize V7000. For
example:
mkuser -name user_name -remote
where user_name specifies the name of the user. The remote parameter specifies that the user
authenticates to the remote authentication service.
mkuser -name user_name -usergrp group_name_or_id
where user_name specifies the name of the user and group_name_or_id specifies the name or ID of the
user group with which the local user is associated.
v The usergrp parameter specifies that the user authenticates to the system by using system
authentication methods.
2. Issue the chuser CLI command to change the attributes of an existing user. For example:
chuser -usergrp group_id_or_name user_id_or_name
where the group_id_or_name specifies the new group for the user and user_id_or_name specifies the
user to be changed.
3. Issue the chcurrentuser CLI command to change the attributes of the current user. For example:
chcurrentuser -nokey
where the nokey parameter specifies that the SSH key of the user is to be deleted.
4. Issue the rmuser CLI command to delete a user. For example:
rmuser user_id_or_name
where user_id_or_name specifies the ID or name of the user to delete.
5. Issue the lsuser CLI command to display the users that are created on the system. For example:
lsuser user_id_or_name
where user_id_or_name specifies the ID or name of the user to view. If you do not specify a user ID
or name, all users on the system are displayed in the concise view.
6. Issue the lscurrentuser CLI command to display the name and role of the logged-in user. For
example:
lscurrentuser
The name and the role of the user are displayed.
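The two forms of mkuser from step 1 can be sketched in one helper. The user name jdoe and group Monitor are hypothetical; the helper chooses between the remote and local forms and prints the command that would be issued.

```shell
# Sketch: compose mkuser for either a remote or a local user.
create_user() {
  name=$1; mode=$2; grp=$3
  if [ "$mode" = remote ]; then
    echo "mkuser -name $name -remote"            # authenticates via remote service
  else
    echo "mkuser -name $name -usergrp $grp"      # local user in a system group
  fi
}

create_user jdoe remote
create_user jdoe local Monitor
```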
The notification settings apply to the entire cluster. You can specify the types of events that cause the
cluster to send a notification. The cluster sends a Simple Network Management Protocol (SNMP)
notification. The SNMP setting represents the type of notification.
SNMP is the standard protocol for managing networks and exchanging messages. SNMP enables the
SAN Volume Controller to send external messages that notify personnel about an event. You can use an
SNMP manager to view the messages that the SNMP agent sends.
Note: A valid community string can contain up to 60 letters or digits. A maximum of
six SNMP destinations can be specified.
In configurations that use SNMP, the SAN Volume Controller uses the notifications settings to call home
if errors occur. You must specify Error and send the trap to the IBM System Storage Productivity Center
or the master console if you want the SAN Volume Controller to call home when errors occur.
Procedure
1. To create a new SNMP server to receive notifications, use the mksnmpserver CLI command. For
example, enter one of the following commands:
mksnmpserver -ip 9.11.255.63
where 9.11.255.63 is the IP address for this server.
mksnmpserver -ip 9.11.255.63 -port remoteportnumber
where 9.11.255.63 is the IP address for this server and remoteportnumber is the port number for the
remote SNMP server.
2. To change the settings of an existing SNMP server, enter the chsnmpserver command. For example:
chsnmpserver -name newserver snmp_server_name_or_id
where newserver is the new name for the server and snmp_server_name_or_id is the name or ID of
the server to be modified.
3. To remove an existing SNMP server from the system, enter the rmsnmpserver command. For example:
rmsnmpserver snmp_server_name_or_id
where snmp_server_name_or_id is either the name or the ID of the SNMP server to be deleted.
4. To display either a concise list or a detailed view of the SNMP servers that are detected by the cluster,
enter the lssnmpserver command. For example, to display a concise view, enter the following
command:
lssnmpserver -delim :
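With -delim :, the concise view is easy to parse in scripts. The following sketch extracts fields from a hypothetical lssnmpserver -delim : listing with awk; the column layout and values here are assumptions for illustration, not captured system output.

```shell
# Hypothetical concise output of 'lssnmpserver -delim :'; the column
# layout is an assumption for illustration -- check your own headers.
sample_output='id:name:IP_address:error:warning:info:port
0:snmp0:9.11.255.63:on:on:on:162
1:snmp1:9.71.46.186:on:off:off:162'

# Print the name and IP address of each configured SNMP server,
# skipping the header row.
list_snmp_targets() {
  printf '%s\n' "$sample_output" | awk -F: 'NR > 1 { print $2, $3 }'
}

list_snmp_targets
```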
The syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver on an
IP network. The IP network can be either IPv4 or IPv6. The system can send syslog messages that notify
personnel about an event. The system can transmit syslog messages in either expanded or concise format.
You can use a syslog manager to view the syslog messages that the system sends. The system uses the
User Datagram Protocol (UDP) to transmit the syslog message. You can specify up to a maximum of six
syslog servers. You can use the management GUI or the SAN Volume Controller command-line interface
to configure and modify your syslog settings.
76 SAN Volume Controller and Storwize V7000: Command-Line Interface User's Guide
The syslog event notification settings apply to the entire clustered system (system). You can specify the
types of events that cause the system to send a notification. The possible types of notifications are error,
warning, or information.
Note: Servers that are configured with facility values of 0 - 3 receive syslog messages in concise format.
Servers that are configured with facility values of 4 - 7 receive syslog messages in fully expanded format.
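The facility rule in the note above can be captured in a small helper. This is a sketch of client-side logic, not a product command; it simply maps a configured facility value to the message format that server will receive.

```shell
# Map a syslog facility value to the message format the server
# receives: facilities 0-3 get concise messages, 4-7 fully expanded.
syslog_format_for_facility() {
  case "$1" in
    [0-3]) echo concise ;;
    [4-7]) echo expanded ;;
    *)     echo invalid; return 1 ;;
  esac
}

syslog_format_for_facility 2   # concise
syslog_format_for_facility 5   # expanded
```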
To configure and work with notification settings, use the following commands:
Procedure
1. Issue the mksyslogserver CLI command to specify the action that you want to take when a syslog
error or event is logged to the error log. For example, you can issue the following CLI command to
set up a syslog notification:
mksyslogserver mysyslogserver1 -ip 9.11.255.123
where mysyslogserver1 is the name given to the Syslog server definition and 9.11.255.123 is the
external Internet Protocol (IP) address of the syslog server.
2. To modify a syslog notification, issue the chsyslogserver command. For example:
chsyslogserver mysyslogserver1 -ip 9.11.255.123
where mysyslogserver1 is the name given to the Syslog server definition and 9.11.255.123 is the
external IP address of the syslog server.
3. To delete a syslog notification, issue the rmsyslogserver command. For example:
rmsyslogserver mysyslogserver1 -force
4. To display either a concise list or a detailed view of syslog servers that are configured on the system,
issue the lssyslogserver command. For example, to display a concise view, enter the following
command:
lssyslogserver -delim :
To set up, manage, and activate email event and inventory notifications, complete the following steps:
Procedure
1. Enable your system to use the email notification function. To do this, issue the mkemailserver CLI
command. Up to six SMTP email servers can be configured to provide redundant access to the
external email network.
The following example creates an email server object. It specifies the name, IP address, and port
number of the SMTP email server. After you issue the command, you see a message that indicates
that the email server was successfully created.
mkemailserver -ip ip_address -port port_number
Note: Inventory information is automatically reported to IBM when you activate error reporting.
6. Optionally, test the email notification function to ensure that it is operating correctly and send an
inventory email notification. SAN Volume Controller uses the notifications settings to call home if
errors occur.
v To send a test email notification to one or more recipients, issue the testemail CLI command. You
must either specify all or the user ID or user name of an email recipient that you want to send a
test email to.
v To send an inventory email notification to all recipients that are enabled to receive inventory email
notifications, issue the sendinventoryemail CLI command. There are no parameters for this
command.
You can specify a server object that describes a remote Simple Mail Transfer Protocol (SMTP) email server
to receive event notifications from the clustered system. You can specify up to six servers to receive
notifications. To configure and work with email servers, use the following commands:
Procedure
1. Issue the mkemailserver CLI command to create an email server object that describes a remote Simple
Mail Transfer Protocol (SMTP) email server. For example, issue the following CLI command to set up
an email server:
mkemailserver -ip ip_address
where ip_address is the IP address of a remote email server. This must be a valid IPv4 or IPv6 address.
2. To change the parameters of an existing email server object, issue the chemailserver command. For
example:
chemailserver -ip ip_address email_server_name_or_id
where ip_address is the IP address of the email server object and email_server_name_or_id is the name or
ID of the server object to be changed.
3. To delete a specified email server object, issue the rmemailserver command. For example:
rmemailserver email_server_name_or_id
4. To display either a concise list or a detailed view of email servers that are configured on the system,
issue the lsemailserver command. For example, to display a concise view, enter the following
command:
lsemailserver -delim :
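Because up to six SMTP email servers can be configured, a setup script might generate the mkemailserver invocations up front and enforce that limit. This sketch only prints the commands rather than running them; the IP addresses and port are placeholders.

```shell
# Generate (but do not run) one mkemailserver command per SMTP server,
# enforcing the six-server limit. IP addresses and port are placeholders.
build_email_server_cmds() {
  if [ "$#" -gt 6 ]; then
    echo "error: at most 6 SMTP email servers can be configured" >&2
    return 1
  fi
  for ip in "$@"; do
    printf 'mkemailserver -ip %s -port 25\n' "$ip"
  done
}

build_email_server_cmds 192.0.2.10 192.0.2.11
```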
Passwords only affect the management GUI that accesses the clustered system. To restrict access to the
CLI, you must control the list of SSH client keys that are installed on the clustered system.
Perform the following steps to change the superuser and service passwords:
Procedure
1. Issue the following command to change the superuser password:
chuser -password superuser_password superuser
Where superuser_password is the new superuser password that you want to use.
2. Issue the following command to change the service password:
chsystem -servicepwd service_password
Where service_password is the new service password that you want to use.
Procedure
Issue the setlocale CLI command with the ID for the locale.
For example, issue the following CLI command to change the locale setting from US English to Japanese:
setlocale 3
Procedure
1. Issue the svcinfo lsfeaturedumps command to return a list of dumps in the /dumps/feature
destination directory. The feature log is maintained by the cluster. The feature log records events that
are generated when license parameters are entered or when the current license settings have been
breached.
2. Issue the svcservicemodeinfo lsfeaturedumps command to return a list of the files of the specified
type that exist on the given node.
Procedure
Issue the following CLI command to list error log entries by file type: lseventlog
Results
This command lists the error log entries. You can filter by type; for example, lseventlog -filtervalue
object_type=mdisk displays the error log by managed disks (MDisks).
You can display the whole log or filter the log so that only errors, events, or unfixed errors are displayed.
You can also request that the output is sorted either by error priority or by time. For error priority, the
most serious errors are the lowest-numbered errors. Therefore, the most serious errors are displayed first
in the table. For time, either the older or the latest entry can be displayed first in the output.
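As a sketch of the priority ordering described above, the sample entries below (the error codes and descriptions are invented for illustration) can be sorted so that the lowest-numbered, most serious codes come first:

```shell
# Invented sample event-log lines in the form '<error_code> <description>'.
sample_log='1001 sample error A
1625 sample error B
981 sample error C'

# Sort numerically on the error code so the most serious
# (lowest-numbered) entries are listed first.
sort_by_priority() {
  printf '%s\n' "$sample_log" | sort -n -k1,1
}

sort_by_priority
```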
If you want to remove all input power to a clustered system (for example, the machine room power must
be shut down for maintenance), you must shut down the system before the power is removed. If you do
not shut down the system before turning off input power to the uninterruptible power supply, the SAN
Volume Controller nodes detect the loss of power and continue to run on battery power until all data
that is held in memory is saved to the internal disk drive. This increases the time that is required to make
the system operational when input power is restored and severely increases the time that is required to
recover from an unexpected loss of power that might occur before the uninterruptible power supply
batteries have fully recharged.
When input power is restored to the uninterruptible power supply units, they start to recharge. However,
the SAN Volume Controller nodes do not permit any I/O activity to be performed to the VDisks
(volumes) until the uninterruptible power supply is charged enough to enable all the data on the SAN
Volume Controller nodes to be saved in the event of an unexpected power loss. This might take as long
as two hours. Shutting down the system prior to removing input power to the uninterruptible power
supply units prevents the battery power from being drained and makes it possible for I/O activity to
resume as soon as input power is restored.
Before shutting down a system, quiesce all I/O operations that are destined for the system. Failure to do
so can result in failed I/O operations being reported to your host operating systems.
Attention: If you are shutting down the entire system, you lose access to all volumes that are provided
by this system. Shutting down the system also shuts down all SAN Volume Controller nodes. This
shutdown causes the hardened data to be dumped to the internal hard drive.
Begin the following process of quiescing all I/O to the system by stopping the applications on your hosts
that are using the volumes that are provided by the system.
1. Determine which hosts are using the volumes that are provided by the system.
2. Repeat the previous step for all volumes.
If input power is lost and subsequently restored, you must press the power button on the uninterruptible
power supply units before you press the power buttons on the SAN Volume Controller nodes.
Procedure
1. Issue the following command to shut down a clustered system:
stopsystem
This procedure is for upgrading from SAN Volume Controller version 6.1.0 or later. To upgrade from
version 5.1.x or earlier, see the relevant information center or publications that are available at this
website:
www.ibm.com/storage/support/2145
Procedure
1. Download, install, and run the latest version of the Software Upgrade Test Utility to verify that there
are no issues with the current clustered system environment. You can download the most current
version of this tool at the following website:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000585
2. Download the SAN Volume Controller code from www.ibm.com/storage/support/2145.
v If you want to write the SAN Volume Controller code to a CD, you must download the CD image.
v If you do not want to write the SAN Volume Controller code to a CD, you must download the
installation image.
3. Use PuTTY scp (pscp) to copy the upgrade files to the node.
4. Ensure that the upgrade file has been successfully copied.
Before you begin the upgrade, you must be aware of the following:
v The installation process fails under the following conditions:
If the code that is installed on the remote system is not compatible with the new code or if there
is an intersystem communication error that does not allow the system to check that the code is
compatible.
If any node in the system has a hardware type that is not supported by the new code.
If the SAN Volume Controller determines that one or more volumes in the system would be
taken offline by rebooting the nodes as part of the upgrade process. You can find details about
which volumes would be affected by using the lsdependentvdisks command. If you are
prepared to lose access to data during the upgrade, you can use the force flag to override this
restriction.
v The upgrade is distributed to all the nodes in the system by using internal connections between the
nodes.
v Nodes are updated one at a time.
v Nodes will run the new code concurrently with normal system activity.
v While the node is updated, it does not participate in I/O activity in the I/O group. As a result, all
I/O activity for the volumes in the I/O group is directed to the other node in the I/O group by the
host multipathing software.
v There is a 30-minute delay between node updates. The delay allows time for the host multipathing
software to rediscover paths to the nodes which have been upgraded, so that there is no loss of
access when another node in the I/O group is updated.
v The update is not committed until all nodes in the system have been successfully updated to the
new code level. If all nodes successfully restart with the new code level, the new level is
committed. When the new level is committed, the system vital product data (VPD) is updated to
reflect the new code level.
v You cannot invoke the new functions of the upgraded code until all member nodes are upgraded
and the update has been committed.
v Because the upgrade process takes some time, the installation command completes as soon as the
code level is verified by the system. To determine when the upgrade has completed, you must
either display the code level in the system VPD or look for the Software upgrade complete event
in the error/event log. If any node fails to restart with the new code level or fails at any other time
during the process, the code level is backed off.
v During an upgrade, the version number of each node is updated when the code has been installed
and the node has been restarted. The system code version number is updated when the new code
level is committed.
v When the upgrade starts an entry is made in the error or event log and another entry is made
when the upgrade completes or fails.
5. Issue the following CLI command to start the upgrade process:
applysoftware -file software_upgrade_file
where software_upgrade_file is the name of the code upgrade file. If the system identifies any volumes
that would go offline as a result of rebooting the nodes as part of the system upgrade, the code
upgrade does not start. An optional force parameter can be used to indicate that the upgrade should
continue despite the identified problems. Use the lsdependentvdisks command to identify the cause
of the failed upgrade. If you use the force parameter, you are prompted to confirm that you want to
continue. The behavior of the force parameter has changed, and it is no longer required when
applying an upgrade to a system with errors in the event log.
6. Issue the following CLI command to check the status of the code upgrade process:
lssoftwareupgradestatus
Note: If a status of stalled_non_redundant is displayed, proceeding with the remaining set of node
upgrades might result in offline volumes. Contact an IBM service representative to complete the
upgrade.
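Because the installation command returns before the upgrade finishes, a wait loop around the status check can be useful. In this sketch, check_status is a stand-in stub so the example is self-contained; on a live system it would run lssoftwareupgradestatus over SSH instead.

```shell
# Stub: report 'upgrading' for the first two checks, then 'inactive'.
# On a live system this would invoke lssoftwareupgradestatus via SSH.
check_status() {
  if [ "$1" -lt 3 ]; then echo upgrading; else echo inactive; fi
}

poll_upgrade() {
  checks=0
  while :; do
    checks=$((checks + 1))
    status=$(check_status "$checks")
    [ "$status" = upgrading ] || break
    : # on a live system: sleep 60 between checks
  done
  echo "status '$status' after $checks checks"
}

poll_upgrade
```

A real loop should also watch for the stalled_non_redundant status noted above and stop rather than continue.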
7. To verify that the upgrade successfully completed, issue the lsnodevpd CLI command for each node
that is in the system. The code version field displays the new code level.
Results
When a new code level is applied, it is automatically installed on all the nodes that are in the system.
Use the lsdumps command with the optional prefix parameter to specify a directory. If you do not
specify a directory, /dumps is used as the default. Use the optional node_id_or_name parameter to specify
the node to list the available dumps for. If you do not specify a node, the available dumps on the
configuration node are listed.
An audit log keeps track of action commands that are issued through an SSH session or from the
management GUI. To list a specified number of the most recently audited commands, issue the
catauditlog command. To dump the contents of the audit log to a file on the current configuration node,
issue the dumpauditlog command. This command also clears the contents of the audit log.
Dumps contained in the /dumps/cimom directory are created by the CIMOM (Common Information Model
Object Manager) that runs on the clustered system (system). These files are produced during normal
operations of the CIMOM.
Dumps that are contained in the /dumps/elogs directory are dumps of the contents of the error and event
log at the time that the dump was taken. An error or event log dump is created by using the dumperrlog
command. This dumps the contents of the error or event log to the /dumps/elogs directory. If no file
name prefix is supplied, the default errlog_ is used. The full default file name is
errlog_NNNNNN_YYMMDD_HHMMSS, where NNNNNN is the node front panel name. If the command
is used with the -prefix parameter, the prefix value is used instead of errlog.
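The default file-name convention can be composed and decomposed mechanically. A sketch follows; the panel name and timestamp values are invented for illustration.

```shell
# Build the default error-log dump name, errlog_NNNNNN_YYMMDD_HHMMSS,
# and pull the node front panel name back out of it. The panel name
# and timestamp below are invented for illustration.
make_errlog_name() {   # args: panel_name yymmdd hhmmss
  printf 'errlog_%s_%s_%s\n' "$1" "$2" "$3"
}

errlog_panel_name() {  # arg: dump file name
  printf '%s\n' "$1" | cut -d_ -f2
}

name=$(make_errlog_name 104603 121012 134512)
echo "$name"                 # errlog_104603_121012_134512
errlog_panel_name "$name"    # 104603
```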
Dumps contained in the /dumps/feature directory are dumps of the featurization log. A featurization log
dump is created by using the dumpinternallog command. This dumps the contents of the featurization
log to the /dumps/feature directory to a file called feature.txt. Only one of these files exists, so every
time the dumpinternallog command is run, this file is overwritten.
Dumps that are contained in the /dumps/iostats directory are dumps of the per-node I/O statistics for
disks on the system. An I/O statistics dump is created by using the startstats command. As part of this
command, you can specify a time interval for the statistics to be written to the file; the default is 15
minutes. Every time the time interval is encountered, the I/O statistics that have been collected are
written to a file in the /dumps/iostats directory. The file names that are used for storing I/O statistics
dumps are Nm_stats_NNNNNN_YYMMDD_HHMMSS, Nv_stats_NNNNNN_YYMMDD_HHMMSS,
Dumps that are contained in the /dumps/iotrace directory are dumps of I/O trace data. The type of data
that is traced depends on the options specified by the settrace command. The collection of the I/O trace
data is started by using the starttrace command. The I/O trace data collection is stopped when the
stoptrace command is used. It is when the trace is stopped that the data is written to the file. The file
name is prefix_NNNNNN_YYMMDD_HHMMSS, where prefix is the value entered for the filename
parameter in the settrace command, and NNNNNN is the node name.
Dumps that are contained in the /dumps/mdisk directory are copies of solid-state drive (SSD) MDisk
internal logs. These dumps are created using the triggerdrivedump command. The file name is
mdiskdump_NNNNNN_MMMM_YYMMDD_HHMMSS, where NNNNNN is the name of the node that
contains the MDisk, and MMMM is the decimal ID of the MDisk.
Software upgrade packages are contained in the /home/admin/upgrade directory. These directories exist on
every node in the system.
Dumps of support data from a disk drive are contained in the /dumps/drive directory. This data can help
to identify problems with the drive, and does not contain any data that applications may have written to
the drive.
Dumps that are contained in the /dumps directory result from application abends. Such dumps are written
to the /dumps directory. The default file names are dump.NNNNNN.YYMMDD.HHMMSS, where NNNNNN
is the node front panel name. In addition to the dump file, there might be some trace files written to this
directory that are named NNNNNN.trc.
Because files can only be copied from the current configuration node (using secure copy), you can issue
the cpdumps command to copy the files from a nonconfiguration node to the current configuration node.
Chapter 5. Array commands
Array commands capture information that can assist you with managing arrays.
charray
Use the charray command to change array attributes.
Syntax
charray [-name new_name_arg] [-sparegoal 0-100] [-balanced] mdisk_id | mdisk_name
Parameters
-name
(Optional) The new name to apply to the array MDisk.
-sparegoal
(Optional) Sets the number of spares that the array members should be protected by.
-balanced
(Optional) Forces the array to balance and configure the spare goals of the present drives.
mdisk_id
Identifies, by ID, the array MDisk that the command applies to.
mdisk_name
Identifies, by user-defined name, the array MDisk that the command applies to.
Description
This command changes the attributes of an array MDisk.
Invocation examples
charray -name raid6mdisk0 0
charray -sparegoal 2 mdisk52
charray -balanced 3
No feedback
charraymember
Use the charraymember command to modify an array member's attributes, or to swap a member of a
RAID array with that of another drive.
Parameters
-member
Identifies the array member index to operate on.
-balanced
(Optional) Forces the array member spare goals to be set to one of the following:
v The present array member goals
v The existing exchange goals
v The goals of the new drive
-newdrive
(Optional) Identifies the drive to add to the array.
-immediate
(Optional) Specifies that the old disk is to be immediately removed from the array, and the new disk
rebuilt. If you do not choose this option, exchange is used; this preserves redundancy during the
rebuild.
-unbalanced
(Optional) Forces the array member to change if the newDrive does not meet array member goals.
mdisk_id
(Either the ID or the name is required.) Identifies which ID array the MDisk command applies to.
mdisk_name
(Either the ID or the name is required.) Identifies which name array the MDisk command applies to.
Description
This command modifies an array member's attributes, or swaps a member of a RAID array with
another drive. Table 9 shows the command combination options.
Table 9. charraymember combination options
Option Description
-balanced v Member goals are set to the properties of the existing member or exchange drive.
v The command will fail if the member is not populated with a drive.
v Drives that are being exchanged into the array count as members.
v If no exchange exists, the existing member drive goals are used.
-newdrive drive_id v The command processes the exchange, and does NOT update the member goals.
v You must specify a new drive that is an exact match for the member goals.
v The command will fail if the drive is not an exact match.
-newdrive drive_id -balanced The command processes the exchange and updates the member goals to the properties of
the new drive.
Table 9. charraymember combination options (continued)
Option Description
-newdrive drive_id v The command processes the exchange and does NOT update the member goals.
-unbalanced
v This is only permitted when the array is degraded and the member is empty.
v This means -immediate has no effect; the exchange is always immediate.
v Later, if drives are a sufficient member goal match, the array rebalance selects
those drives.
v A balancing exchange resets the member goals.
An invocation example
To swap a spare/candidate drive for a member 1 drive and start component rebuild for the new member:
charraymember -member 1 -newdrive 3 -immediate mdisk3
An invocation example
To swap in a spare/candidate drive for member index 2. If a drive is present at that index, an
exchange is performed:
charraymember -member 2 -newdrive 4 mdisk4
An invocation example
To force an exchange and make the array change its goals to the new drive:
charraymember -member 3 -newdrive 9 -balanced mdisk5
An invocation example
To force an unbalancing exchange when drive 8 does not match the goals:
charraymember -member 2 -newdrive 8 -unbalanced mdisk5
An invocation example
To force an immediate exchange and make the array change its goals to the new drive:
charraymember -member 3 -newdrive 9 -balanced -immediate mdisk5
lsarray
Use the lsarray command to list the array MDisks.
Syntax
lsarray [-filtervalue attribute=value] [-filtervalue?] [-bytes] [mdisk_id | mdisk_name]
Parameters
-filtervalue attribute=value
(Optional) Specifies a list of one or more filter attributes matching the specified values; see
-filtervalue? for the supported attributes. Only objects with a value that matches the filter attribute
value are returned. If capacity is specified, the units must also be included. Use the unit parameter to
interpret the value for size or capacity.
Note: Some filters allow the use of a wildcard when entering the command. The following rules
apply to the use of wildcards with the SAN Volume Controller CLI:
v The wildcard character is an asterisk (*).
v The command can contain a maximum of one wildcard, which must be the first or last character in
the string.
v When using a wildcard character, you must enclose the filter entry within double quotation marks
(""), as follows:
lsarray -filtervalue "name=md*"
-filtervalue?
(Optional) Includes all of the valid filter attributes in the report. The following filter attributes are
valid for the lsarray command:
v mdisk_id
v mdisk_name
v status
v mode
v mdisk_grp_id
v mdisk_grp_name
v capacity
v fast_write_state
v raid_status
v raid_level
v redundancy
v strip_size
v write_verify
v spare_goal
v spare_protection_min
v balanced
v tier
Any parameters specified with the -filtervalue? parameter are ignored.
For more information about filtering attributes, see Attributes of the -filtervalue parameters on
page xxv.
-bytes
(Optional) Requests output of capacities in bytes (instead of rounded values).
mdisk_id
(Optional) The identity of the array MDisk.
mdisk_name
(Optional) The MDisk name that you provided.
Description
This command returns a concise list or a detailed view of array MDisks visible to the clustered system
(system). Table 10 provides the potential output for array MDisks; see also the lsmdisk command.
Table 10. MDisk output
Attribute Values
status v online
v offline
v excluded
v degraded (applies only to internal MDisks)
mode unmanaged, managed, image, array
quorum_index 0, 1, 2, or blank if the MDisk is not being used as a quorum disk
block_size 512 bytes in each block of storage (or blank)
ctrl_type 4, 6, where 6 is a solid-state drive (SSD) attached inside a node and 4 is any other
device
tier The tier this MDisk has been assigned to by auto-detection (for internal arrays) or
by the user:
v generic_ssd
v generic_hdd (the default value for newly discovered or external MDisk)
Note: You can change this value using the chmdisk command.
raid_status v offline - the array is offline on all nodes
v degraded - the array has deconfigured or offline members; the array is not fully
redundant
v syncing - array members are all online, the array is syncing parity or mirrors to
achieve redundancy
v initting - array members are all online, the array is initializing; the array is fully
redundant
v online - array members are all online, and the array is fully redundant
raid_level The RAID level of the array (RAID0, RAID1, RAID5, RAID6, RAID10).
redundancy The number of member disks that can fail before the array fails.
strip_size The strip size of the array (in KB).
spare_goal The number of spares that the array members should be protected by.
spare_protection_min The minimum number of spares that an array member is protected by.
redundancy:0
strip_size:256
spare_goal:2
spare_protection_min:2
balanced:yes
tier:generic_hdd
lsarrayinitprogress
Use the lsarrayinitprogress command to view the progress of array background initialization that
occurs after creation.
Syntax
lsarrayinitprogress [-nohdr] [-delim delimiter] [mdisk_id | mdisk_name]
Parameters
-nohdr
(Optional) By default, headings are displayed for each column of data in a concise style view, and for
each item of data in a detailed style view. The -nohdr parameter suppresses the display of these
headings.
Description
This command shows the progress of array background initialization. Table 11 shows possible outputs.
Table 11. lsarrayinitprogress output
Attribute Value
progress The percentage of the initialization task that has been completed.
estimated_completion_time The expected initialization task completion time (YYMMDDHHMMSS).
lsarraylba
Use the lsarraylba command to permit an array logical block address (LBA) to be found from a drive
and LBA.
Syntax
lsarraylba [-delim delimiter] -drivelba lba -drive drive_id
Parameters
-delim delimiter
(Optional) By default in a concise view, all columns of data are space-separated. The width of each
column is set to the maximum possible width of each item of data. In a detailed view, each item of
data has its own row, and if the headers are displayed, the data is separated from the header by a
space. The -delim parameter overrides this behavior. Valid input for the -delim parameter is a
one-byte character. If you enter -delim : on the command line, the colon character (:) separates all
items of data in a concise view; for example, the spacing of columns does not occur. In a detailed
view, the data is separated from its header by the specified delimiter.
-drivelba
The LBA on the drive to convert to the array LBA. The LBA must be specified in hex, with a 0x
prefix.
-drive
The ID of the drive to view.
Description
This command permits an array LBA to be found from a drive and LBA. Table 12 on page 95 shows
possible outputs.
Table 12. lsarraylba output
Attribute Value
type The type of MDisk extent allocation:
v allocated
v unallocated
mdisk_lba The LBA on the array MDisk (blank if none).
mdisk_start The start of range of LBAs (strip) on the array MDisk (blank if none).
mdisk_end The end of range of LBAs (strip) on the array MDisk (blank if none).
drive_start The start of range of LBAs (strip) on the drive (blank if none).
drive_end The end of range of LBAs (strip) on the drive (blank if none).
This example demonstrates how drive 2 LBA 0xff maps to MDisk 2 LBA 0xff.
An invocation example
lsarraylba -delim : -drivelba 0xff -drive 2
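For intuition about this kind of lookup, here is a toy striping model: it maps a drive LBA to an array LBA for an idealized RAID-0 layout with a fixed strip size. This is purely illustrative and is not the product's internal mapping, which also accounts for parity and mirroring.

```shell
# Toy model: array LBA for a given drive LBA in an idealized RAID-0
# layout. Arguments: drive index, drive LBA, number of drives, and
# strip size in blocks. NOT the product's internal algorithm.
array_lba() {
  d=$1 lba=$2 n=$3 strip=$4
  stripe=$((lba / strip))    # which strip number on this drive
  offset=$((lba % strip))    # offset inside that strip
  echo $(( (stripe * n + d) * strip + offset ))
}

# A 4-drive array with 8-block strips: drive 1, LBA 10 lands in the
# second stripe, so the array LBA is (1*4 + 1)*8 + 2 = 42.
array_lba 1 10 4 8   # 42
```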
lsarraymember
Use the lsarraymember command to list the member drives of one or more array MDisks.
Syntax
lsarraymember [-delim delimiter] [-bytes] [mdisk_id | mdisk_name]
Parameters
-delim delimiter
(Optional) By default, in a concise view all columns of data are space-separated, with the width of
each column set to the maximum possible width of each item of data. In a detailed view, each item of
data is an individual row, and if displaying headers, the data is separated from the header by a
space. The -delim parameter overrides this behavior. Valid input for the -delim parameter is a
one-byte character. Enter -delim : on the command line, and the colon character (:) separates all
items of data in a concise view (for example, the spacing of columns does not occur); in a detailed
view, the specified delimiter separates the data from its header
-bytes
(Optional) Requests output of capacities in bytes (instead of rounded values).
mdisk_id
(Optional) The identity of the array MDisk.
mdisk_name
(Optional) The MDisk name that you provided.
This command lists the member drives of one or more array MDisks. It also describes positions within
an array that are unoccupied by a drive. The positions determine how the RAID geometry is applied;
for example, whether drive x is mirrored to drive y in a RAID-10 array, or where the parity rotation
starts in a RAID-5 array.
An invocation example (two arrays)
lsarraymember -delim :
lsarraymembergoals
Use the lsarraymembergoals command to list the spare goals for member drives of one or more array
MDisks.
Syntax
lsarraymembergoals [-delim delimiter] [-bytes] [mdisk_id | mdisk_name]
Parameters
-delim delimiter
(Optional) By default, in a concise view all columns of data are space-separated, with the width of
each column set to the maximum possible width of each item of data. In a detailed view, each item of
data is an individual row, and if displaying headers, the data is separated from the header by a
space. The -delim parameter overrides this behavior. Valid input for the -delim parameter is a
one-byte character. Enter -delim : on the command line, and the colon character (:) separates all
items of data in a concise view (for example, the spacing of columns does not occur); in a detailed
view, the data is separated from its header by the specified delimiter.
-bytes
(Optional) Requests output of capacities in bytes (instead of rounded values).
mdisk_id
(Optional) The identity of the array MDisk.
Description
This command lists the spare goals for member drives of one or more array MDisks. Table 14 provides
the potential output for this command.
Table 14. lsarraymembergoals output
Attribute Values
member_id The ID of the array member which represents the drive order in the RAID array.
drive_id The ID of the drive for the member ID (blank if none is configured).
capacity_goal The capacity goal for the array member (same for all members in the array).
tech_type_goal The technology goal for the array member:
v sas_ssd
v sas_hdd
v sas_nearline_hdd
RPM_goal The RPM goal for the array member (blank for SSDs).
enclosure_id_goal The ID of the member enclosure goal (blank if any can be selected).
slot_id_goal The ID of the member slot goal.
node_id_goal The node ID of the goal.
An invocation example (a four-member RAID 10 SAS array that is split across chains)
lsarraymembergoals -delim : mdisk2
An invocation example (a four-member RAID 0 SAS array contained within a single enclosure)
lsarraymembergoals -delim : mdisk4
mdisk_id:mdisk_name:member_id:drive_id:capacity_goal:
tech_type_goal:RPM_goal:enclosure_id_goal:slot_id_goal
2:mdisk2:0:0:222.0GB:sas_nearline_hdd:15000:1:1
2:mdisk2:1:1:222.0GB:sas_nearline_hdd:15000:1:2
2:mdisk2:2:2:222.0GB:sas_nearline_hdd:15000:1:3
2:mdisk2:3:3:222.0GB:sas_nearline_hdd:15000:1:4
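Output produced with -delim : is straightforward to post-process. The following Python sketch is a hypothetical reader-side helper (not part of the product); the header string below joins the two wrapped header lines from the example above.

```python
def parse_concise_view(header, rows, delim=":"):
    """Split delimited CLI output (as produced with -delim :) into
    dictionaries, one per output row, keyed by the header fields."""
    fields = header.split(delim)
    return [dict(zip(fields, row.split(delim))) for row in rows]

header = ("mdisk_id:mdisk_name:member_id:drive_id:capacity_goal:"
          "tech_type_goal:RPM_goal:enclosure_id_goal:slot_id_goal")
rows = ["2:mdisk2:0:0:222.0GB:sas_nearline_hdd:15000:1:1",
        "2:mdisk2:1:1:222.0GB:sas_nearline_hdd:15000:1:2"]
members = parse_concise_view(header, rows)
```

The same helper works for any concise view requested with a one-byte delimiter.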
lsarraymemberprogress
Use the lsarraymemberprogress command to display array member background process status.
Syntax
lsarraymemberprogress
-nohdr -delim delimiter
mdisk id mdisk_name
Parameters
-nohdr
(Optional) By default, headings are displayed for each column of data in a concise style view, and for
each item of data in a detailed style view. The -nohdr parameter suppresses the display of these
headings.
Description
This command displays array member background process status. Both component rebuild and exchange
are shown in the same view; an exchange cannot start on a member that is rebuilding. Table 15 provides
the potential output for this command.
Table 15. lsarraymemberprogress output
Attribute Value
member_id The array member index.
drive_id The ID of the drive.
task The identity of the task:
v rebuild
v exchange
lsarraysyncprogress
Use the lsarraysyncprogress command to display how synchronized a RAID array is.
Syntax
lsarraysyncprogress
-nohdr -delim delimiter mdisk_id
mdisk_name
Parameters
-nohdr
(Optional) By default, headings are displayed for each column of data in a concise style view, and for
each item of data in a detailed style view. The -nohdr parameter suppresses the display of these
headings.
mdisk_id
(Optional) The ID of the MDisk you want to view.
mdisk_name
(Optional) The user-defined name of the MDisk you want to view.
Description
This command shows you how synchronized a RAID array is. It includes internal activity that is working
toward a fully synchronized array. Table 16 provides the potential output.
Table 16. lsarraysyncprogress output
Attribute Value
progress The percentage of the array that is synchronized.
estimated_completion_time The expected synchronization completion time (YYMMDDHHMMSS; blank if
completion time unknown).
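Since estimated_completion_time uses the fixed YYMMDDHHMMSS form (blank when unknown), it can be turned into a datetime value with a small Python sketch; this is a reader-side helper written against the documented format, not a product API.

```python
from datetime import datetime

def parse_completion_time(value):
    """Parse an estimated_completion_time field in YYMMDDHHMMSS form.
    A blank value means the completion time is unknown."""
    if not value:
        return None
    return datetime.strptime(value, "%y%m%d%H%M%S")
```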
A concise view invocation example (qualified with mdisk_id for in-sync mdisk10)
lsarraysyncprogress -delim : mdisk10
mkarray
Use the mkarray command to create an MDisk array and add it to a storage pool.
Syntax
mkarray -level raid0 | raid1 | raid10 -drive drive_id_list [-strip 128 | 256]
[-sparegoal 0-(MAX_DRIVES-1)] [-name new_name_arg] mdiskgrp_id | mdiskgrp_name
Parameters
Restriction: RAID-5 and RAID-6 are for Storwize V7000, Flex V7000 Storage Node, Storwize V3500,
and Storwize V3700 products.
-drive
Identifies the drive or drives to use as members of the RAID array.
Drives are specified as a sequence of mirrored drive pairs. For example, if an array is created with
-drive a:b:c:d, drive b contains the mirror copy of drive a, and drive d contains the mirror copy of
drive c.
Restriction: The following restriction applies to any pair-based array levels for drives located in
nodes instead of enclosures.
v RAID-0: All drives in a RAID-0 array of internal drives must be located in the same node.
v RAID-1: The pair of drives must contain one drive from one node in the I/O group, and one drive
from the other node.
v RAID-10: The drives are specified as a sequence of drive pairs. Each pair of drives must contain
one drive from a node in the I/O group, and a drive from the other node.
-strip
(Optional) Sets strip size (in kilobytes) for the array MDisk being created. The default is 256 KB.
-sparegoal
(Optional) Sets the number of spare drives that should protect this array's members. The default is
1 (except for RAID 0 arrays, which have a default of 0).
-name
(Optional) Specifies the name that you want to apply to the array MDisk.
mdiskgrp_id
Identifies the storage pool (by ID) to which you want to add the created array MDisk.
mdiskgrp_name
Identifies the storage pool (by the user-defined name) to which you want to add the created array MDisk.
Description
This command creates an array MDisk (RAID array) and adds it to a storage pool. Although the array
tier is automatically determined, you can change it later using the chmdisk command.
Standard output
MDisk, id [x], successfully created
An invocation example
mkarray -level raid0 -drive 0:1:2:3 raid0grp
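The mirrored-pair ordering of the -drive list described above (drive b mirrors drive a, drive d mirrors drive c) can be sketched in Python; this is an illustration of the documented pairing rule for mirrored levels, not product code.

```python
def mirror_pairs(drive_list):
    """Split a -drive argument such as "a:b:c:d" into mirrored pairs:
    each even-indexed drive is mirrored by the drive that follows it,
    so a:b:c:d yields the pairs (a, b) and (c, d)."""
    drives = drive_list.split(":")
    if len(drives) % 2:
        raise ValueError("mirrored levels require an even number of drives")
    return list(zip(drives[0::2], drives[1::2]))
```

For a RAID-1 or RAID-10 array created with -drive 0:1:2:3, drive 0 is mirrored by drive 1 and drive 2 by drive 3.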
recoverarray
Use the recoverarray command to recover a specific corrupt array in a dead domain scenario.
Syntax
recoverarray mdisk_id mdisk_name
Parameters
mdisk_id
Identifies (by ID) the specific array to recover.
mdisk_name
Identifies (by user-assigned name) the specific array to recover.
Description
This command recovers a specific corrupt array. An array has metadata representing ongoing/pending
platform writes, which are lost when the domain nodes are lost.
An invocation example
recoverarray mdisk1
recoverarraybycluster
Attention: The recoverarraybycluster command has been discontinued. Use the recoverarraybysystem
command instead.
recoverarraybysystem
Use the recoverarraybysystem command to recover corrupt arrays in a dead domain scenario.
Syntax
recoverarraybysystem
Parameters
None.
Description
Use the recoverarraybysystem command to recover corrupt arrays in a dead domain scenario.
An invocation example
recoverarraybysystem
rmarray
Use the rmarray command to remove an array MDisk from the configuration.
Syntax
rmarray -mdisk mdisk_id_list | mdisk_name_list [-force] mdiskgrp_id | mdiskgrp_name
Parameters
-mdisk
Identifies the array MDisk or MDisks to remove from the storage pool.
-force
(Optional) Forces a remove when the MDisk has allocated extents by migrating the used extents to
free extents in the storage pool.
mdiskgrp_id
Identifies (by ID) the MDisk group to remove the created array MDisk from.
mdiskgrp_name
Identifies (by user-defined name) the MDisk group to remove the created array MDisk from.
Description
This command removes an array MDisk from the configuration. The member drives of the removed array
become candidate drives.
An invocation example
rmarray -mdisk 6 mdiskgrp10
Chapter 6. Audit log commands
An audit log keeps track of action commands that are issued through a Secure Shell (SSH) session or
through the management GUI.
catauditlog
Use the catauditlog command to display the in-memory contents of the audit log.
Syntax
catauditlog
-delim delimiter -first number_of_entries_to_return
Parameters
-delim delimiter
(Optional) By default in a concise view, all columns of data are space-separated. The width of each
column is set to the maximum possible width of each item of data. In a detailed view, each item of
data has its own row, and if the headers are displayed, the data is separated from the header by a
space. The -delim parameter overrides this behavior. Valid input for the -delim parameter is a
one-byte character. If you enter -delim : on the command line, the colon character (:) separates all
items of data in a concise view; for example, the spacing of columns does not occur. In a detailed
view, the data is separated from its header by the specified delimiter.
-first number_of_entries_to_return
(Optional) Specifies the number of most recent entries to display.
Description
This command lists a specified number of the most recently audited commands.
The in-memory portion of the audit log holds approximately 1 MB of audit information. Depending on
the command text size and the number of parameters, this equates to approximately 6000 commands.
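The figures above imply an average entry size, worked out in the fragment below. This is rough arithmetic only; the real size of an entry varies with the command text and its parameters.

```python
# Approximate arithmetic behind the audit log capacity figures.
IN_MEMORY_LOG_BYTES = 1 * 1024 * 1024   # roughly 1 MB of in-memory audit data
APPROX_COMMAND_CAPACITY = 6000          # approximately 6000 commands

# Average space available per logged command, in bytes.
avg_bytes_per_command = IN_MEMORY_LOG_BYTES // APPROX_COMMAND_CAPACITY
```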
Once the in-memory audit log reaches maximum capacity, the log is written to a local file on the
configuration node in the /dumps/audit directory. The catauditlog command only displays the
in-memory part of the audit log; the on-disk part of the audit log is in readable text format and does not
require any special command to decode it.
The in-memory log entries are reset and cleared automatically, ready to accumulate new commands. The
on-disk portion of the audit log can then be analyzed at a later date.
The lsdumps on page 242 command with -prefix /dumps/audit can be used to list the files on the
disk.
As commands are executed they are recorded in the in-memory audit log. When the in-memory audit log
becomes full it is automatically dumped to an audit log file and the in-memory audit log is cleared.
Use this command to display the in-memory audit log. Use the dumpauditlog command to manually
dump the contents of the in-memory audit log to a file on the current configuration node and clear the
contents of the in-memory audit log.
This example lists the five most recent audit log entries.
An invocation example
catauditlog -delim : -first 5
dumpauditlog
Use the dumpauditlog command to reset or clear the contents of the in-memory audit log. The contents
of the audit log are sent to a file in the /dumps/audit directory on the current configuration node.
Syntax
dumpauditlog
Parameters
None.
Description
This command dumps the contents of the audit log to a file on the current configuration node. It also
clears the contents of the audit log. This command is logged as the first entry in the new audit log.
Audit log dumps are automatically maintained in the /dumps/audit directory. The local file system space
is used by audit log dumps and is limited to 200 MB on any node in the clustered system. The space
limit is maintained automatically by deleting the minimum number of old audit log dump files so that
the /dumps/audit directory space is reduced below 200 MB. This deletion occurs once per day on every
node in the system. The oldest audit log dump files are considered to be the ones with the lowest audit
log sequence number. Also, audit log dump files with a clustered system ID number that does not match
the current one are considered to be older than files that match the system ID, regardless of sequence
number.
Other than by running dumps (or copying dump files among nodes), you cannot alter the contents of the
audit directory. Each dump file name is generated automatically in the following format:
auditlog_firstseq_lastseq_timestamp_systemid
where
v firstseq is the audit log sequence number of the first entry in the log
v lastseq is the audit sequence number of the last entry in the log
v timestamp is the timestamp of the last entry in the audit log that is being dumped
v systemid is the system ID at the time that the dump was created
The audit log dump file names cannot be changed.
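The naming convention and pruning rules above can be modelled in Python. This sketch uses hypothetical file names and assumes the sequence numbers are plain integers and that the timestamp contains no underscores; it only illustrates the documented ordering.

```python
def prune_order(filenames, current_systemid):
    """Return audit log dump files ordered oldest-first for deletion.
    Files whose system ID does not match the current system are treated
    as older than any matching file; within each group, a lower first
    sequence number means an older file."""
    def age_key(name):
        # Format: auditlog_firstseq_lastseq_timestamp_systemid
        _, firstseq, _lastseq, _timestamp, systemid = name.split("_")
        return (systemid == current_systemid, int(firstseq))
    return sorted(filenames, key=age_key)

files = ["auditlog_200_299_121001120000_0000020061C0",
         "auditlog_0_99_120101120000_0000020099AB",
         "auditlog_100_199_120601120000_0000020061C0"]
oldest_first = prune_order(files, "0000020061C0")
```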
The audit log entries in the dump files contain the same information as displayed by the catauditlog
command; however, the dumpauditlog command displays the information with one field per line. The
lsdumps on page 242 command displays a list of the audit log dumps that are available on the nodes
in the clustered system.
Use this command to manually dump the contents of the in-memory audit log to a file on the current
configuration node and clear the contents of the in-memory audit log. Use the catauditlog on page 105
command to display the in-memory audit log.
An invocation example
dumpauditlog
lsauditlogdumps (Deprecated)
Attention: The lsauditlogdumps command is deprecated. Use the lsdumps command to display a list of
files in a particular dumps directory.
Deprecated.
backup
Use the backup command to back up the configuration. Enter this command any time after a clustered
system (system) has been created.
Syntax
svcconfig backup [-quiet] [-v on | off]
Parameters
-quiet
Suppresses standard output (STDOUT) messages from the console.
-v on | off
Displays normal (off, the default state) or verbose (on) command messages.
Description
The backup command extracts and stores configuration information from the system. The backup
command produces the svc.config.backup.xml, svc.config.backup.sh, and svcconfig.backup.log files, and
saves them in the /tmp folder. The .xml file contains the extracted configuration information; the .sh file
contains a script of the commands used to determine the configuration information; and the .log file
contains details about command usage.
The underscore character (_) prefix is reserved for backup and restore command usage; do not use the
underscore character in any object names.
An invocation example
svcconfig backup
clear
Use the clear command to erase files in the /tmp directory that were previously produced by other
svcconfig commands. You can enter this command any time after a clustered system (system) has been
created.
Syntax
svcconfig clear [-all] [-q | -quiet] [-v on | off]
Parameters
-all
Erases all configuration files.
-q | -quiet
Suppresses console output (STDOUT).
-v on | off
Produces verbose output (on); the default is regular output (off).
Description
You can use the svcconfig clear command without the -all parameter to erase files of the form:
/tmp/svc.config*.sh
/tmp/svc.config*.log
You can use the svcconfig clear command with the -all parameter to erase files of the form:
/tmp/svc.config*.sh
/tmp/svc.config*.log
/tmp/svc.config*.xml
/tmp/svc.config*.bak
An invocation example
svcconfig clear -all
help
Use the help command to obtain summary information about the syntax of the svcconfig command. You
can enter this command any time after a clustered system (system) has been created.
Syntax
svcconfig [-ver] [-h | -?]
svcconfig backup | clear | restore [-h | -?]
Parameters
-ver
Returns the version number for the svcconfig command.
(action) -h | -?
Provides command help: the possible values for (action) are backup, clear, and restore.
-h | -?
Provides general help.
Description
An invocation example
svcconfig -ver
svcconfig -?
svcconfig backup -h
restore
Use the restore command to restore the clustered system (system) to its previous configuration. This
command uses the configuration files in the /tmp folder.
Syntax
svcconfig restore [-f | -force] [-q | -quiet] [-prepare | -execute]
[-fmt | -fmtdisk] [-v on | off]
Parameters
-f | -force
Forces continued processing where possible.
-q | -quiet
Suppresses console output (STDOUT).
-prepare
Verifies the current configuration against the information in svc.config.backup.xml; then prepares
commands for processing in svc.config.restore.sh, and then produces a log of events in
svc.config.restore.prepare.
-fmt | -fmtdisk
Includes the -fmtdisk option on all mkvdisk commands to be issued.
Description
The restore command restores the target system configuration from the svc.config.backup.xml file in the
/tmp folder. If neither the -prepare nor the -execute option is specified, the command performs both
phases in sequence, producing only a single event log: svc.config.restore.log.
The restore operation is also known as a T4 (Tier 4) recovery, and should be used only on a system that
has just been initialized. The restore operation must not be used on a system that has any nonautomatic
objects configured, such as MDisk groups (storage pools) or VDisks (volumes).
The command pauses for eight minutes if any nodes are added during this process, informing the user of
this at run-time.
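The phase behaviour described above (both phases run when neither flag is given) can be sketched as a small Python model of the documented rule:

```python
def restore_phases(prepare, execute):
    """Phases run by svcconfig restore: -prepare alone verifies and
    prepares, -execute alone runs the prepared script, and giving
    neither flag runs both phases in sequence."""
    if not prepare and not execute:
        return ["prepare", "execute"]
    return [name for name, chosen in
            (("prepare", prepare), ("execute", execute)) if chosen]
```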
An invocation example
svcconfig restore -prepare -fmt
svcconfig restore -execute
svcconfig restore
Chapter 8. Clustered system commands
Clustered system commands are used to monitor and modify clustered systems.
addnode
Use the addnode command to add a new (candidate) node to an existing clustered system.
Syntax
addnode -panelname panel_name | -wwnodename wwnn_arg [-name new_name_arg]
-iogrp iogroup_name | iogroup_id
Parameters
-panelname panel_name
(Required if you do not specify the -wwnodename parameter) Specifies the node that you want to
add to a system by the name that is displayed on the display panel. You cannot use this parameter
with the -wwnodename parameter.
-wwnodename wwnn_arg
(Required if you do not specify the -panelname parameter) Specifies the node that you want to add
to the system by the worldwide node name (WWNN). You cannot use this parameter with the
-panelname parameter.
-name new_name_arg
(Optional) Specifies a name for the node that you want to add to the system. You can use this name
in subsequent commands to refer to the node, instead of using the node ID.
Note: Node names supplied with the -name parameter on the addnode and chnode commands must
not already be in use as node names or as node failover_names.
If you assign a name, this name is displayed as the node name from then on. If you do not assign a
name, a default name is used. The default name that is used depends on whether the node is
replacing one that has previously been deleted. When a node is deleted, its name is retained in the
I/O group as the failover name of its partner node. If no nodes remain in an I/O group, no failover
names are retained. Only one failover name can be stored for each node. If you add a node into an
I/O group that has a retained failover name and do not specify a node name, the retained failover
name is assigned to this node. If you do not specify a name and there is no retained failover name,
the name assigned has the format nodeX.
Important: The iSCSI Qualified Name (IQN) for each node is generated using the system and node
names. If you are using the iSCSI protocol, the target name for this node might already be active on its
partner node, with iSCSI hosts attached to it. Adding the node with a different name changes the
IQN of this node in the system and might require reconfiguration of all iSCSI-attached hosts.
-iogrp iogroup_name | iogroup_id
(Required) Specifies the I/O group to which you want to add this node.
Note: The addnode command is a SAN Volume Controller command. For Storwize V7000, use the
addcontrolenclosure command.
Description
This command adds a new node to the system. You can obtain a list of candidate nodes (those that are
not already assigned to a system) by typing lsnodecandidate.
Note: The lsnodecandidate command is a SAN Volume Controller command. For Storwize V7000, use
the lscontrolenclosurecandidate command.
Note: This command is successful only if the node-enclosure system ID matches the system, or is blank.
Before you add a node to the system, you must check to see if any of the following conditions are true. If
the following conditions exist, failure to follow the procedures that are documented here might result in
the corruption of all data that is managed by the system.
v Is the new node being used to replace a failed node in the system?
v Does the node being added to the system use physical node hardware that has been used as a node in
another system, and are both systems recognized by the same hosts?
If any of the previous conditions are true, you must take the following actions:
1. Add the node to the same I/O group that it was previously in. You can use the command-line
interface command lsnode or the management GUI to determine the WWNN of the system nodes.
2. Shut down all of the hosts that use the system, before you add the node back into the system.
3. Add the node back to the system before the hosts are restarted. If the I/O group information is
unavailable or it is inconvenient to shut down and restart all of the hosts that use the system, you can
do the following:
a. On all of the hosts that are connected to the system, unconfigure the Fibre Channel adapter device
driver, the disk device driver, and the multipathing driver before you add the node to the system.
b. Add the node to the system, and then reconfigure the Fibre Channel adapter device driver, the
disk device driver, and multipathing driver.
If you are adding a new node to a system, take the following actions:
1. Ensure that the model type of the new node is supported by the version of code on the system. If the
model type is not supported by the system code, you must upgrade the system to a version of code
that supports the model type of the new node.
2. Record the node serial number, the WWNN, all WWPNs, and the I/O group to which the node has
been added. You might need to use this information later. Having it available can prevent possible
data corruption if the node must be removed from and re-added to the clustered system.
When you add a node to the system using the addnode command or the system GUI, you must confirm
whether the node has previously been a member of the system. If it has, follow one of these two
procedures:
v Add the node to the same I/O group that it was previously in. You can determine the WWNN of the
nodes in the system using the lsnode command.
v If you cannot determine the WWNN of the nodes in the cluster, call the support team to add the node
back into the system without corrupting the data.
When a node is added to a system, it displays a state of adding. It can take as long as 30 minutes for the
node to be added to the system, particularly if the version of code on the node has changed.
Attention: If the node remains in the adding state for more than 30 minutes, contact your support
representative to assist you in resolving this issue.
When a node is deleted, its name is retained in an I/O group as the failover name of its partner node. If
no nodes remain in an I/O group, no failover names are retained. The addnode command fails if you
specify a name that is either an existing node name or a retained failover name. Specify a different name
for the node being added.
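The default-name rules described under the -name parameter amount to the following Python sketch (a hypothetical helper; the nodeX index shown is illustrative):

```python
def default_node_name(retained_failover_name, next_index):
    """Pick the default name for a node added without -name: a retained
    failover name in the I/O group wins; otherwise the node is named
    nodeX using the next free index."""
    if retained_failover_name:
        return retained_failover_name
    return "node{}".format(next_index)
```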
An invocation example
addnode -wwnodename 5005076801e08b -iogrp io_grp0
cfgportip
Use the cfgportip command to assign an Internet Protocol (IP) address to each node ethernet port for
Internet Small Computer System Interface (iSCSI) input/output (I/O).
Syntax
For Internet Protocol Version 4 (IPv4) and Internet Protocol Version 6 (IPv6):
cfgportip -node node_name | node_id {-ip ipv4addr -mask subnet_mask -gw ipv4gw |
-ip_6 ipv6addr -prefix_6 prefix -gw_6 ipv6gw} [-failover] port_id
Parameters
-node node_name | node_id
(Required) Specifies which node has the ethernet port that the IP address is being assigned to.
Note: This parameter is required for setting a port IP address. It cannot be used with the -mtu
parameter.
-ip ipv4addr
(Required) Sets the Internet Protocol Version 4 (IPv4) address for the ethernet port. You cannot use
this parameter with the ip_6 parameter.
-ip_6 ipv6addr
(Required) Sets the Internet Protocol Version 6 (IPv6) address for the ethernet port. You cannot use
this parameter with the ip parameter.
-gw ipv4gw
(Required) Sets the IPv4 gateway IP address. You cannot use this parameter with the gw_6 parameter.
-gw_6 ipv6gw
(Required) Sets the IPv6 default gateway address for the port. You cannot use this parameter with the
gw parameter.
-mask subnet_mask
(Required) Sets the IPv4 subnet mask. You cannot use this parameter with the prefix_6 parameter.
-prefix_6 prefix
(Required) Sets the IPv6 prefix. You cannot use this parameter with the mask parameter.
Description
The cfgportip command either sets the IP address of an Ethernet port for iSCSI, or configures the MTU
of a group of ports. This command assigns either an IPv4 or IPv6 address to a specified Ethernet port of
a node. The IP address is used for iSCSI I/O. Use the chsystemip command to assign clustered system IP
addresses.
For an IPv4 address, the ip, mask, and gw parameters are required. All of the IPv4 IP parameters must be
specified to assign an IPv4 address to an ethernet port.
For an IPv6 address, the ip_6, prefix_6, and gw_6 parameters are required. All of the IPv6 IP parameters
must be specified to assign an IPv6 address to an ethernet port.
Use the lsportip command with the optional ethernet_port_id parameter to list the port IP addresses
for the specified port.
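The all-or-nothing rule for the IPv4 and IPv6 parameter sets can be expressed as a small check; this is a reader-side sketch of the documented rule, not a product API.

```python
def valid_portip_params(params):
    """Check the cfgportip rule: an IPv4 assignment needs exactly
    -ip, -mask and -gw; an IPv6 assignment needs exactly -ip_6,
    -prefix_6 and -gw_6; the two families cannot be mixed."""
    ipv4 = {"ip", "mask", "gw"}
    ipv6 = {"ip_6", "prefix_6", "gw_6"}
    given = set(params)
    return given == ipv4 or given == ipv6
```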
An invocation example to set the MTU to its default value
cfgportip defaultmtu -iogrp 0 1
chcluster
The chcluster command has been discontinued. Use the chsystem command instead.
chsystem
Use the chsystem command to modify the attributes of an existing clustered system. Enter this command
any time after a system has been created. All the parameters that are associated with this command are
optional. However, you must specify one or more parameters with this command.
Syntax
chsystem [-name system_name] [-consoleip console_ip_address]
[-rcbuffersize new_size_in_MB] [-speed fabric_speed]
[-alias id_alias] [-invemailinterval interval]
[-gmlinktolerance link_tolerance] [-gmmaxhostdelay max_host_delay]
[-gminterdelaysimulation inter_system_delay_simulation]
[-gmintradelaysimulation intra_system_delay_simulation]
[-icatip ipv4_icat_ip_address] [-icatip_6 ipv6_icat_ip_address]
[-ntpip ipv4_ntp_ip_address] [-ntpip_6 ipv6_ntp_ip_address]
[-isnsip sns_server_address] [-isnsip_6 ipv6_sns_server_address]
[-relationshipbandwidthlimit bandwidth_in_mbps] [-infocenterurl url]
[-iscsiauthmethod none | chap] [-chapsecret chap_secret | -nochapsecret]
[-layer replication | storage] [-cacheprefetch on | off] [-regensslcert]
Parameters
-name system_name
(Optional) Specifies a new name for the system.
-consoleip console_ip_address
(Optional) Specifies a new console IP address for the system.
Important: The console Internet Protocol (IP) is restored in T3 as part of the configuration process.
The console IP address is automatically set to the system IP. If an Internet Protocol Version 4 (IPv4)
address has been set for system port 1 it is used (otherwise, an IPv6 address is used). If the console
IP address is set it overrides this automatic default. If you set the console IP, issuing chsystemip does
not change the console IP. If the console IP is the same as the system port 1 IP, issuing chsystemip
continues to change the console IP. Setting the consoleip to 0.0.0.0 resets it to the system port 1 IP
address.
-rcbuffersize new size in MB
Specifies the size (in megabytes) of the resource pool. If you specify this parameter, the clustered
system must have more than 8 gigabytes (GB), or 8192 megabytes (MB), of random access memory
(RAM) reported by node vital product data (VPD). If there is not 8 GB or 8192 MB, an error message
is generated.
-speed fabric_speed
(Optional) Specifies the speed of the fabric to which this system is attached. Valid values are 1 or 2
(Gbps).
Attention: Changing the speed on a running system breaks I/O service to the attached hosts. Before
changing the fabric speed, stop I/O from active hosts and force these hosts to flush any cached data
by unmounting volumes (for UNIX host types) or by removing drive letters (for Windows host
types). Some hosts might need to be rebooted to detect the new fabric speed.
-alias id_alias
(Optional) Specifies an alternate name that does not change the basic ID for the system, but does
influence the VDisk_UID of every vdiskhostmap, both existing and new. These objects are created for
a system whose ID matches the alias. Therefore, changing the system alias causes loss of host system
access, until each host rescans for volumes presented by the system.
-invemailinterval interval
(Optional) Specifies the interval at which inventory emails are sent to the designated email recipients.
The interval range is 0 to 15. The interval is measured in days. Setting the value to 0 turns off the
inventory email notification function.
-gmlinktolerance link_tolerance
(Optional) Specifies the length of time, in seconds, for which an inadequate intersystem link is
tolerated for a Global Mirror operation. The parameter accepts values from 10 to 400 seconds in steps
of 10 seconds. The default is 300 seconds. You can disable the link tolerance by entering a value of
zero (0) for this parameter.
-gmmaxhostdelay max_host_delay
(Optional) Specifies the maximum time delay, in milliseconds, at which the Global Mirror link
tolerance timer starts counting down. This threshold value determines the additional impact that
Global Mirror operations can add to the response times of the Global Mirror source volumes. You can
use this parameter to increase the threshold from the default value of 5 milliseconds.
-gminterdelaysimulation inter_system_delay_simulation
(Optional) Specifies the intersystem delay simulation, which simulates the Global Mirror round trip
delay between two systems, in milliseconds. The default is 0; the valid range is 0 to 100 milliseconds.
-gmintradelaysimulation intra_system_delay_simulation
(Optional) Specifies the intrasystem delay simulation, which simulates the Global Mirror round trip
delay in milliseconds. The default is 0; the valid range is 0 to 100 milliseconds.
-icatip ipv4_icat_ip_address
(Optional) Specifies the new IPv4 address used by the system for the ICAT console.
-ntpip ipv4_ntp_ip_address
(Optional) Specifies the IPv4 address for the Network Time Protocol (NTP) server. Configuring an
NTP server address causes the system to use that NTP server as its time source. To stop using the
NTP server as a time source, specify the -ntpip parameter with a zero address, as follows:
chsystem -ntpip 0.0.0.0
-ntpip_6 ipv6_ntp_ip_address
Note: Before you specify -ntpip_6, an IPv6 prefix and gateway must be set for the system.
(Optional) Specifies the IPv6 address for the NTP server. Configuring an NTP server address causes
the system to immediately start using that NTP server as its time source. To stop using the NTP
server as a time source, invoke the -ntpip_6 parameter with a zero address, as follows:
chsystem -ntpip_6 0::0
-isnsip sns_server_address
(Optional) Specifies the IPv4 address for the iSCSI storage name service (SNS). To stop using the
configured IPv4 iSCSI SNS server, specify the -isnsip parameter with a zero address, as follows:
chsystem -isnsip 0.0.0.0
-isnsip_6 ipv6_sns_server_address
(Optional) Specifies the IPv6 address for the iSCSI SNS. To stop using the configured IPv6 iSCSI SNS
server, specify the -isnsip_6 parameter with a zero address, as follows:
chsystem -isnsip_6 0::0
-relationshipbandwidthlimit bandwidth_in_mbps
(Optional) Specifies the new background copy bandwidth in megabytes per second (MBps), from 1 -
1000. The default is 25 MBps. This parameter operates system-wide and defines the maximum
background copy bandwidth that any relationship can adopt. The existing background copy
bandwidth settings defined on a partnership continue to operate, with the lower of the partnership
and volume rates attempted.
Note: Do not set this value higher than the default without establishing that the higher bandwidth
can be sustained.
-infocenterurl url
(Optional) Specifies the preferred infocenter URL to override the one used by the GUI. Because this
information is interpreted by the Internet browser, the specified value might contain a hostname or an
IP address.
Remember: You can view the currently configured URL in the GUI preferences panel, and use that
panel to reset this value to the default setting.
-iscsiauthmethod none | chap
(Optional) Sets the authentication method for the iSCSI communications of the system. The
iscsiauthmethod value can be none or chap.
-chapsecret chap_secret
(Optional) Sets the Challenge Handshake Authentication Protocol (CHAP) secret to be used to
authenticate the system using iSCSI. This parameter is required if the iscsiauthmethod chap
parameter is specified. The specified CHAP secret cannot begin or end with a space.
-layer replication | storage
(Optional) Specifies the layer in which the system operates; the value must be either replication or
storage. This parameter can be used only if no other systems are visible on the fabric and no system
partnerships are defined.
-cacheprefetch on | off
(Optional) Indicates whether cache prefetching is enabled or disabled across the system. Adjust this
setting only under the direction of the IBM Support Center.
-regensslcert
Regenerates the SSL certificates. Use this option if the SSL certificate expires.
Description
This command modifies specific features of a system. Multiple features can be changed by issuing a
single command.
Using the -ntpip or -ntpip_6 parameter allows the system to use an NTP server as an outside time
source. The system adjusts the system clock of the configuration node according to time values from the
NTP server. The clocks of the other nodes are updated from the configuration node clock. In the NTP
mode, the setsystemtime command is disabled.
All command parameters are optional; however, you must specify at least one parameter.
Use the chsystemip command to modify the system IP address and service IP address.
An invocation example
chsystem -ntpip 9.20.165.16
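Because multiple features can be changed in a single chsystem invocation, several of the settings above can be combined. The following is a sketch only; the NTP address and CHAP secret are placeholder values:

```shell
# Set the NTP server and enable CHAP authentication in one call
# (placeholder address and secret; run on the system CLI).
chsystem -ntpip 9.20.165.16 -iscsiauthmethod chap -chapsecret mysecret01
```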
120 SAN Volume Controller and Storwize V7000: Command-Line Interface User's Guide
chsystemip
Use the chsystemip command to modify the Internet Protocol (IP) configuration parameters for the
clustered system.
Syntax
chsystemip [-clusterip ipv4addr] [-gw ipv4addr] [-mask subnet_mask] [-noip]
 [-clusterip_6 ipv6addr] [-gw_6 ipv6addr] [-prefix_6 prefix] [-noip_6]
 -port system_port
Parameters
-clusterip ipv4addr
(Optional) Changes the IPv4 system IP address. When you specify a new IP address for a system, the
existing communication with the system is broken.
-gw ipv4addr
(Optional) Changes the IPv4 default gateway IP address of the system.
-mask subnet_mask
(Optional) Changes the IPv4 subnet mask of the system.
-noip
(Optional) Unconfigures the IPv4 stack on the specified port, or both ports if none is specified.
Note: This parameter does not affect node service address configurations.
-clusterip_6 ipv6addr
(Optional) Sets the IPv6 system address for the port.
-gw_6 ipv6addr
(Optional) Sets the IPv6 default gateway address for the port.
-prefix_6 prefix
(Optional) Sets the IPv6 prefix.
-noip_6
(Optional) Unconfigures the IPv6 stack on the specified port, or both ports if none is specified.
Note: This parameter does not affect node service address configurations.
-port system_port
Specifies which port (1 or 2) to apply changes to. This parameter is required unless the noip or
noip_6 parameter is used.
Description
This command modifies IP configuration parameters for the system. The first time you configure a
second port, all IP information is required. Port 1 on the system must always have one stack fully
configured.
If the system IP address is changed, the open command-line shell closes during the processing of the
command. You must reconnect to the new IP address if connected through that port.
If there is no port 2 available on any of the system nodes, the chsystemip command fails.
The noip and noip_6 parameters can be specified together only if the port is also specified. The noip and
noip_6 parameters cannot be specified with any parameters other than port.
Note: The noip and noip_6 parameters do not affect node service address configurations.
Port 1 must have an IPv4 or IPv6 system address. The configuration of port 2 is optional.
Service IP addresses for all ports and stacks are initialized to Dynamic Host Configuration Protocol
(DHCP). A service IP address is always configured.
Note: If the console_ip is the same as the system port 1 IP address (IPv4 or IPv6), change the
console_ip when the system IP is changed. If the console_ip differs from the system port 1 IP address,
do not change the console_ip when the system IP is changed.
Modifying an IP address: List the IP address of the system by issuing the lssystem command. Modify
the IP address by issuing the chsystemip command. You can either specify a static IP address or have the
system assign a dynamic IP address.
An invocation example
chsystemip -clusterip 9.20.136.5 -gw 9.20.136.1 -mask 255.255.255.0 -port 1
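As a further sketch, the IPv4 stack can be unconfigured on port 2 alone; as noted above, this does not affect node service address configurations:

```shell
# Unconfigure the IPv4 stack on system port 2 only
# (run on the system CLI).
chsystemip -noip -port 2
```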
chiogrp
Use the chiogrp command to modify the name of an I/O group, or the amount of memory that is
available for Copy Services or VDisk (volume) mirroring operations.
Syntax
chiogrp [-name new_name] [-feature flash | remote | mirror | raid -size memory_size [-kb]]
 [-maintenance yes | no] io_group_id | io_group_name
Parameters
-name new_name
(Optional) Specifies the name to assign to the I/O group. The -name parameter cannot be specified
with the -feature, -size, or -kb parameters.
-feature flash | remote | mirror | raid
(Optional) Specifies the feature to modify the amount of memory for: Copy Services or volume
mirroring. You must specify this parameter with the -size parameter. You cannot specify this
parameter with the -name parameter.
Note: Specifying remote changes the amount of memory that is available for Metro Mirror or Global
Mirror processing. Any volume that is in a Metro Mirror or Global Mirror relationship uses memory
in its I/O group, including master and auxiliary volumes, and volumes that are in inter-clustered
system (inter-system) or intra-system relationships.
-size memory_size
(Optional) Specifies the amount of memory that is available for the specified Copy Services or
volume mirroring function. Valid input is 0 or any integer. The default unit of measurement for this
parameter is megabytes (MB); you can use the kilobytes -kb parameter to override the default. You
must specify this parameter with the -feature parameter. You cannot specify this parameter with the
-name parameter.
-kb
(Optional) Changes the units for the -size parameter from megabytes (MB) to kilobytes (KB). If you
specify this parameter, the -size memory_size value must be any number divisible by 4. You must
specify this parameter with the -feature and -size parameters. You cannot specify this parameter with
the -name parameter.
io_group_id | io_group_name
(Required) Specifies the I/O group to modify. You can modify an I/O group by using the -name or
the -feature parameter.
-maintenance yes|no
(Optional) Specifies whether the I/O group should be in maintenance mode. The I/O group should
be placed in maintenance mode while carrying out service procedures on storage enclosures. Once
you enter maintenance mode, it continues until either:
v It is explicitly cleared, OR
v 30 minutes elapse
Note: Changing the maintenance mode on any I/O group changes the maintenance mode on all I/O
groups.
Description
The chiogrp command modifies the name of an I/O group or the amount of memory that is available for
Copy Services or volume mirroring. You can assign a name to an I/O group or change the name of a
specified I/O group. You can change the amount of memory that is available for Copy Services or
volume mirroring operations by specifying the -feature flash | remote | mirror | raid parameter and a
memory size. For volume mirroring and Copy Services (FlashCopy, Metro Mirror, and Global Mirror),
memory is traded against memory that is available to the cache. The amount of memory can be
decreased or increased. Consider the following memory sizes when you use this command:
v The default memory size for FlashCopy is 20 MB.
v The default memory size for Metro Mirror and Global Mirror is 20 MB.
Table 18 demonstrates the amount of memory that is required for volume mirroring and Copy Services.
Table 18. Memory required for volume mirroring and Copy Services. Each 1 MB of memory provides the
following volume capacity, per grain size, for the specified I/O group:
v Metro Mirror and Global Mirror, 256 KB grain size: 2 TB of total Metro Mirror and Global Mirror
volume capacity
v FlashCopy, 256 KB grain size: 2 TB of total FlashCopy source volume capacity
v FlashCopy, 64 KB grain size: 512 GB of total FlashCopy source volume capacity
v Incremental FlashCopy, 256 KB grain size: 1 TB of total incremental FlashCopy source volume capacity
v Incremental FlashCopy, 64 KB grain size: 256 GB of total incremental FlashCopy source volume
capacity
v Volume mirroring, 256 KB grain size: 2 TB of mirrored volumes
Table 19 provides an example of RAID level comparisons with their bitmap memory cost, where MS is
the size of the member drives and MC is the number of member drives.
Table 19. RAID level comparisons:
v RAID-0: member count 1-8; approximate capacity MC * MS; redundancy none; approximate bitmap
memory cost (1 MB per 2 TB of MS) * MC
v RAID-1: member count 2; approximate capacity MS; redundancy 1; approximate bitmap memory cost
(1 MB per 2 TB of MS) * (MC/2)
v RAID-5: member count 3-16; approximate capacity (MC-1) * MS; redundancy 1; approximate bitmap
memory cost 1 MB per 2 TB of MS with a strip size of 256 KB, double that with a strip size of 128 KB
v RAID-6: member count 5-16; approximate capacity less than (MC-2) * MS; redundancy 2; approximate
bitmap memory cost 1 MB per 2 TB of MS with a strip size of 256 KB, double that with a strip size of
128 KB
v RAID-10: member count 2-16 (even numbers); approximate capacity (MC/2) * MS; redundancy 1;
approximate bitmap memory cost (1 MB per 2 TB of MS) * (MC/2)
Note: There is a margin of error on the approximate bitmap memory cost of approximately 15%. For example, the
cost for a 256 KB RAID-5 is ~1.15 MB for the first 2 TB of MS.
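As a sketch of reading Table 19, the bitmap memory cost for a hypothetical RAID-10 array of eight 4 TB member drives works out as follows (drive size and member count are illustrative values, not from this guide):

```shell
# RAID-10 bitmap cost: (1 MB per 2 TB of member size MS) * (MC/2).
ms_tb=4   # size of each member drive, in TB (hypothetical)
mc=8      # number of member drives (hypothetical)
cost_mb=$(( (ms_tb / 2) * (mc / 2) ))
echo "${cost_mb} MB"   # 2 * 4 = 8 MB of bitmap memory
```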
For multiple FlashCopy targets, you must consider the number of mappings. For example, for a
mapping with a 256 KB grain size, 8 KB of memory allows one mapping between a 16 GB source volume
and a 16 GB target volume. Alternatively, for a mapping with a 256 KB grain size, 8 KB of memory
allows two mappings between one 8 GB source volume and two 8 GB target volumes.
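The accounting above can be checked with ordinary shell arithmetic. This is a sketch, not a system command; the 1 MB per 2 TB figure comes from Table 18 for a 256 KB grain size:

```shell
# With a 256 KB grain size, 1 MB (1024 KB) of bitmap memory covers
# 2 TB (2048 GB) of FlashCopy source capacity, i.e. 0.5 KB per GB
# per mapping.
source_gb=16
mappings=1
memory_kb=$(( source_gb * mappings * 1024 / 2048 ))
echo "${memory_kb} KB"   # 8 KB for one 16 GB -> 16 GB mapping
```

The same 8 KB also covers two mappings from one 8 GB source (8 * 2 * 0.5 KB), matching the second example in the text.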
When you create a FlashCopy mapping, if you specify an I/O group other than the I/O group of the
source volume, the memory accounting goes towards the specified I/O group, not towards the I/O group
of the source volume.
An invocation example
chiogrp -name testiogrpone io_grp0
An invocation example for changing the amount of FlashCopy memory in io_grp0 to 30 MB
chiogrp -feature flash -size 30 io_grp0
chnode | chnodecanister
Use the chnode / chnodecanister command to change the name or iSCSI alias of a node or node canister.
Syntax
chnode | chnodecanister [-iscsialias alias | -noiscsialias] [-name new_node_or_nodecanister_name]
 [-failover] object_id | object_name
Parameters
-iscsialias alias
(Optional) Specifies the iSCSI name of the node or node canister. The maximum length is 79
characters.
-noiscsialias
(Optional) Clears any previously set iSCSI name for this node or node canister. This parameter cannot
be specified with the iscsialias parameter.
-failover
(Optional) Specifies that the name or iSCSI alias being set is the name or alias of the partner node or
node canister in the I/O group. When there is no partner node or node canister, the values set are
applied to the partner node or node canister when it is added to the clustered system (system). If this
parameter is used when there is a partner node or node canister, the name or alias of that node or
node canister changes.
-name new_node_or_nodecanister_name
(Optional) Specifies the name to assign to the node or node canister.
Note: Node or node canister names supplied with -name on chnode / chnodecanister commands
must not be in use already as node or node canister names or as node or node canister failover
names.
Important: The iSCSI Qualified Name (IQN) for each node or node canister is generated using the
clustered system and node or node canister names. If you are using the iSCSI protocol, changing
either name also changes the IQN of all of the nodes or node canisters in the clustered system and
might require reconfiguration of all iSCSI-attached hosts.
Description
If the failover parameter is not specified, this command changes the name or iSCSI alias of the node or
node canister. The name can then be used to identify the node or node canister in subsequent commands.
The failover parameter is used to specify values that are normally applied to the partner node or node
canister in the I/O group. When the partner node or node canister is offline, the iSCSI alias and IQN are
assigned to the remaining node or node canister in the I/O Group. The iSCSI host data access is then
preserved. If the partner node or node canister is offline when these parameters are set, the node or node
canister they are set on handles iSCSI I/O requests to the iSCSI alias specified, or the IQN that is created
using the node or node canister name. If the partner node or node canister in the I/O group is online
when these parameters are set, the partner node or node canister handles iSCSI requests to the iSCSI alias
specified, and its node or node canister name and IQN change.
Note: When using VMware ESX, delete the static paths (in the iSCSI initiator properties) that contain the
old target IQN.
This ensures that the node canister name change does not impact iSCSI I/O during events such as a
target failover.
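A sketch of the failover usage described above, with hypothetical node names: run against node1, this sets the name that applies to its partner node in the I/O group:

```shell
# Set the partner node's name via the -failover parameter
# (hypothetical names; run on the system CLI).
chnode -name node2new -failover node1
```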
chnodehw | chnodecanisterhw
Use the chnodehw / chnodecanisterhw command to update the hardware configuration for a node or
node canister.
Syntax
chnodehw | chnodecanisterhw [-legacy version] [-force] [object_id | object_name]
Parameters
-legacy version
(Optional) Sets the hardware configuration to make it compatible with the 6.3.0.0 code level. The
format is four decimal numbers separated by periods, and there can be up to sixteen characters.
-force
(Optional) Allows the node to restart and change its hardware configuration even if this causes
volumes to go offline.
Important: Using the force parameter might result in a loss of access. Use it only under the direction
of the IBM Support Center.
object_id | object_name
(Optional) Specifies the object name or ID.
Description
This command automatically reboots the node or node canister if the node or node canister hardware is
different than its configured hardware. After rebooting, the node or node canister begins to use its
hardware, and does not use the previous configuration.
Use the -legacy parameter if you want to establish a partnership with another clustered system that is
running an earlier level of code than the local system. The value supplied for the -legacy parameter must
be the code level of the other clustered system.
An invocation example of how to update the node hardware configuration for the
node named node7 (even if the reboot of the node causes an I/O outage)
chnodehw -force node7
cleardumps
Use the cleardumps command to clear (delete) the various dump and log directories on a specified node.
Syntax
cleardumps -prefix directory_or_file_filter [node_id | node_name]
Parameters
-prefix directory_or_file_filter
(Required) Specifies the directory, files, or both to be cleared. If a directory is specified, with no file
filter, all relevant dump or log files in that directory are cleared. You can use the following directory
arguments (filters):
v /dumps (clears all files in all subdirectories)
v /dumps/cimom
v /dumps/configs
v /dumps/elogs
v /dumps/feature
v /dumps/iostats
v /dumps/iotrace
v /dumps/mdisk
v /home/admin/upgrade
In addition to the directory, you can specify a filter file. For example, if you specify
/dumps/elogs/*.txt, all files in the /dumps/elogs directory that end in .txt are cleared.
Note: The following rules apply to the use of wildcards with the SAN Volume Controller CLI:
v The wildcard character is an asterisk (*).
v The command can contain a maximum of one wildcard.
v With a wildcard, you must use double quotation marks (" ") around the filter entry, such as in the
following entry:
cleardumps -prefix "/dumps/elogs/*.txt"
node_id | node_name
(Optional) Specifies the node to be cleared. The variable that follows the parameter is either:
v The node name, that is, the label that you assigned when you added the node to the clustered
system (system)
v The node ID that is assigned to the node (not the worldwide node name).
Description
This command deletes all the files that match the directory/file_filter argument on the specified node. If
no node is specified, the configuration node is cleared.
You can clear all the dumps directories by specifying /dumps as the directory variable.
You can clear all the files in a single directory by specifying one of the directory variables.
You can list the contents of these directories on the given node by using the lsxxxxdumps commands.
You can use this command to clear specific files in a given directory by specifying a directory or file
name. You can use the wildcard character as part of the file name.
Note: To preserve the configuration and trace files, any files that match the following wildcard patterns
are not cleared:
v *svc.config*
v *.trc
v *.trc.old
An invocation example
cleardumps -prefix /dumps/configs
cpdumps
Use the cpdumps command to copy dump files from a nonconfiguration node onto the configuration
node.
Note: In the rare event that the /dumps directory on the configuration node is full, the copy action ends
when the directory is full and provides no indicator of a failure. Therefore, clear the /dumps directory
after migrating data from the configuration node.
Syntax
cpdumps -prefix directory | file_filter node_id | node_name
Parameters
-prefix directory | file_filter
(Required) Specifies the directory, files, or both to be retrieved. If a directory is specified with no
file filter, all relevant dump or log files in that directory are retrieved. You can use the following
directory arguments (filters):
v /dumps (retrieves all files in all subdirectories)
v /dumps/audit
v /dumps/cimom
v /dumps/configs
v /dumps/elogs
v /dumps/feature
v /dumps/iostats
v /dumps/iotrace
v /dumps/mdisk
v /home/admin/upgrade
v (Storwize V7000) /dumps/enclosure
In addition to the directory, you can specify a file filter. For example, if you specified
/dumps/elogs/*.txt, all files in the /dumps/elogs directory that end in .txt are copied.
Note: The following rules apply to the use of wildcards with the CLI:
v The wildcard character is an asterisk (*).
v The command can contain a maximum of one wildcard.
v With a wildcard, you must use double quotation marks (" ") around the filter entry.
Description
This command copies any dumps that match the directory or file criteria from the given node to the
current configuration node.
You can retrieve dumps that were saved to an old configuration node. During failover processing from
the old configuration node to another node, the dumps that were on the old configuration node are not
automatically copied. Because access from the CLI is only provided to the configuration node, clustered
system files can only be copied from the configuration node. This command enables you to retrieve files
and place them on the configuration node so that you can then copy them.
You can view the contents of the directories by using the lsxxxxdumps commands.
An invocation example
cpdumps -prefix /dumps/configs nodeone
detectmdisk
Use the detectmdisk command to manually rescan the Fibre Channel network for any new managed
disks (MDisks) that might have been added, and to rebalance MDisk access across all available controller
device ports.
Syntax
detectmdisk
Description
This command causes the clustered system (system) to rescan the Fibre Channel network. The rescan
discovers any new MDisks that have been added to the system and rebalances MDisk access across the
available controller device ports. This command also detects any loss of controller port availability, and
updates the SAN Volume Controller configuration to reflect any changes.
Note: Although it might appear that the detectmdisk command has completed, some extra time might
be required for it to run. The detectmdisk command is asynchronous and returns a prompt while the
continues to run in the background. You can use the lsdiscoverystatus command to list the discovery
status.
In general, the system automatically detects disks when they appear on the network. However, some
Fibre Channel controllers do not send the required SCSI primitives that are necessary to automatically
discover the new disks.
If you have attached new storage and the system has not detected it, you might need to run this
command before the system detects the new disks.
When back-end controllers are added to the Fibre Channel SAN and are included in the same switch
zone as a system, the system automatically discovers the back-end controller and determines what
storage is presented to it. The SCSI LUs that are presented by the back-end controller are displayed as
unmanaged MDisks. However, if the configuration of the back-end controller is modified after this has
occurred, the system might be unaware of these configuration changes. Run this command to rescan the
Fibre Channel network and update the list of unmanaged MDisks.
Note: The automatic discovery that is performed by the system does not write to an unmanaged MDisk.
Only when you add an MDisk to a storage pool, or use an MDisk to create an image mode virtual disk,
is the storage actually used.
To identify the available MDisks, issue the detectmdisk command to scan the Fibre Channel network for
any MDisks. When the detection is complete, issue the lsmdiskcandidate command to show the
unmanaged MDisks; these MDisks have not been assigned to a storage pool. Alternatively, you can issue
the lsmdisk command to view all of the MDisks.
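The discovery workflow just described can be sketched as a short CLI session; all three commands are covered in this guide, and lsdiscoverystatus may need to be repeated until the rescan completes:

```shell
detectmdisk          # trigger the asynchronous Fibre Channel rescan
lsdiscoverystatus    # check whether the discovery has completed
lsmdiskcandidate     # list unmanaged MDisks not yet in a storage pool
```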
If disk controller ports have been removed as part of a reconfiguration, the SAN Volume Controller
detects this change and reports the following error because it cannot distinguish an intentional
reconfiguration from a port failure:
1630 Number of device logins reduced
If the error persists and redundancy has been compromised, the following more serious error is reported:
1627 Insufficient redundancy in disk controller connectivity
You must issue the detectmdisk command to force SAN Volume Controller to update its configuration
and accept the changes to the controller ports.
Note: Only issue the detectmdisk command when all of the disk controller ports are working and
correctly configured in the controller and the SAN zoning. Failure to do this could result in errors not
being reported.
An invocation example
detectmdisk
ping
Use the ping command to diagnose IP configuration problems by checking whether the specified IP
address is accessible from the configuration node.
Syntax
ping ipv4_address | ipv6_address
Description
This command checks whether the specified IP address is accessible from the configuration node.
Note: You can only use this command on ports 1 and 2 (for management traffic).
The ping takes place only from the configuration node. It can be useful for diagnosing problems where
the configuration node cannot be reached from a specific management server.
An invocation example
ping 9.20.136.11
rmnode | rmnodecanister
Use the rmnode / rmnodecanister command to delete a node or node canister from the clustered system.
Syntax
rmnode | rmnodecanister [-force] object_id | object_name
Parameters
-force
(Optional) Overrides the checks that this command runs. The parameter overrides the following two
checks:
v If the command results in volumes going offline, the command fails unless the force parameter is
used.
v If the command results in a loss of data because there is unwritten data in the write cache that is
contained only within the node or node canister to be removed, the command fails unless the
force parameter is used.
If you use the force parameter as a result of an error about volumes going offline, you force the node
or node canister removal and run the risk of losing data from the write cache. The force parameter
should always be used with caution.
object_id | object_name
(Required) Specifies the object name or ID that you want to modify. The variable that follows the
parameter is either:
v The object name that you assigned when you added the node to the clustered system
v The object ID that is assigned to the node (not the worldwide node name)
Description
This command removes a node or node canister from the clustered system. This makes the node or node
canister a candidate to be added back into this clustered system or into another system. After the node or
node canister is deleted, the other node in the I/O group enters write-through mode until another node
or node canister is added back into the I/O group.
By default, the rmnode / rmnodecanister command flushes the cache on the specified node before the
node or node canister is taken offline. In some circumstances, such as when the system is already
degraded (for example, when both nodes in the I/O group are online and the virtual disks within the
I/O group are degraded), the system ensures that data loss does not occur as a result of deleting the only
node or node canister with the cache data.
The cache is flushed before the node or node canister is deleted to prevent data loss if a failure occurs on
the other node or node canister in the I/O group.
To take the specified node or node canister offline immediately without flushing the cache or ensuring
data loss does not occur, run the rmnode / rmnodecanister command with the -force parameter.
Prerequisites:
Before you issue the rmnode / rmnodecanister command, perform the following tasks and read the
following Attention notices to avoid losing access to data:
1. Determine which virtual disks (VDisks, or volumes) are still assigned to this I/O group by issuing the
following command. The command requests a filtered view of the volumes, where the filter attribute
is the I/O group.
lsvdisk -filtervalue IO_group_name=name
Note: Any volumes that are assigned to the I/O group that this node or node canister belongs to are
assigned to the other node or node canister in the I/O group; the preferred node or node canister is
changed. You cannot change this setting back.
2. Determine the hosts that the volumes are mapped to by issuing the lsvdiskhostmap command.
3. Determine if any of the volumes that are assigned to this I/O group contain data that you need to
access:
v If you do not want to maintain access to these volumes, go to step 5.
v If you do want to maintain access to some or all of the volumes, back up the data or migrate the
data to a different (online) I/O group.
4. Determine if you need to turn the power off to the node or node canister:
v If this is the last node or node canister in the clustered system, you do not need to turn the power
off to the node or node canister. Go to step 5.
v If this is not the last node or node canister in the cluster, turn the power off to the node or node
canister that you intend to remove. This step ensures that the Subsystem Device Driver (SDD) does
not rediscover the paths that are manually removed before you issue the delete node or node
canister request.
5. Update the SDD configuration for each virtual path (vpath) that is presented by the volumes that you
intend to remove. Updating the SDD configuration removes the vpaths from the volumes. Failure to
update the configuration can result in data corruption. See the Multipath Subsystem Device Driver:
User's Guide for details about how to dynamically reconfigure SDD for the given host operating
system.
Attention:
1. Removing the last node in the cluster destroys the clustered system. Before you delete the last node or
node canister in the clustered system, ensure that you want to destroy the clustered system.
2. If you are removing a single node or node canister and the remaining node or node canister in the
I/O group is online, the data can be exposed to a single point of failure if the remaining node or node
canister fails.
3. This command might take some time to complete since the cache in the I/O group for that node or
node canister is flushed before the node or node canister is removed. If the -force parameter is used,
the cache is not flushed and the command completes more quickly. However, if the deleted node or
node canister is the last node or node canister in the I/O group, using the -force option results in the
write cache for that node or node canister being discarded rather than flushed, and data loss can
occur. The -force option should be used with caution.
4. If both nodes or node canisters in the I/O group are online and the volumes are already degraded
before deleting the node or node canister, redundancy to the volumes is already degraded and loss of
access to data and loss of data might occur if the -force option is used.
Notes:
1. If you are removing the configuration node or node canister, the rmnode / rmnodecanister command
causes the configuration node or node canister to move to a different node or node canister within the
clustered system. This process might take a short time: typically less than a minute. The clustered
system IP address remains unchanged, but any SSH client attached to the configuration node or node
canister might need to reestablish a connection. The management GUI reattaches to the new
configuration node or node canister transparently.
2. If this is the last node or node canister in the clustered system or if it is currently assigned as the
configuration node, all connections to the system are lost. The user interface and any open CLI
sessions are lost if the last node or node canister in the clustered system is deleted. A time-out might
occur if a command cannot be completed before the node or node canister is deleted.
rmportip
Use the rmportip command to remove an Internet Small Computer System Interface (iSCSI) IP address
from a node Ethernet port.
Syntax
rmportip [-failover] [-ip_6] -node node_name | node_id port_id
Parameters
-failover
(Optional) Specifies that the failover IP address information be removed for the specified port.
-ip_6
(Optional) Specifies that the IPv6 address be removed for the specified port. If this parameter is not
used, the IPv4 address is removed by default.
-node node_name | node_id
(Required) Specifies the node with the Ethernet port that the IP address is being removed from.
port_id
(Required) Specifies which port (1, 2, 3, or 4) to apply changes to.
Description
This command removes an IPv4 or IPv6 address from an Ethernet port of a node.
setclustertime
Attention: The setclustertime command has been discontinued. Use the setsystemtime command
instead.
setsystemtime
Use the setsystemtime command to set the time for the clustered system (system).
Syntax
setsystemtime -time time_value
Parameters
-time time_value
(Required) Specifies the time to which the system must be set. This must be in the following format
(where 'M' is month, 'D' is day, 'H' is hour, 'm' is minute, and 'Y' is year):
MMDDHHmmYYYY
Description
This command sets the date and time for the clustered system. If the system is using an NTP server as
its time source, the setsystemtime command is disabled.
An invocation example
setsystemtime -time 040509142003
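As a sketch (assuming a POSIX shell with the standard date utility on a management host), the required MMDDHHmmYYYY value can be built from the host clock rather than typed by hand; the setsystemtime call itself is left commented out because it must run on the system CLI:

```shell
# Format the current time as MMDDHHmmYYYY
# (month, day, hour, minute, year).
ts=$(date +%m%d%H%M%Y)
echo "$ts"
# setsystemtime -time "$ts"
```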
setpwdreset
Use the setpwdreset command to view and change the status of the password-reset feature for the
display panel.
Syntax
setpwdreset -disable | -enable | -show
Parameters
-disable
Disables the password-reset feature that is available through the front panel menu system.
-enable
Enables the password-reset feature that is available through the front panel menu system.
-show
Displays the status of the password-reset feature, which is either enabled or disabled.
Description
The system provides an option to reset the system superuser password to the default value.
For SAN Volume Controller systems this can be done using the front panel menu system.
For all systems this can be done using the USB stick. For more information, visit Using the initialization
tool.
This command allows access if the system superuser password is forgotten. If this feature remains
enabled, make sure there is adequate physical security to the system hardware.
An invocation example
setpwdreset -show
This output means that the password-reset feature that is available through the front panel menu
system is enabled. If the password status is [0], this feature is disabled.
settimezone
Use the settimezone command to set the time zone for the cluster.
136 SAN Volume Controller and Storwize V7000: Command-Line Interface User's Guide
Syntax
settimezone -timezone timezone_arg
Parameters
-timezone timezone_arg
Specifies the time zone to set for the cluster.
Description
This command sets the time zone for the cluster. Use the -timezone parameter to specify the numeric ID
of the time zone that you want to set. Issue the lstimezones command to list the time zones that are
available on the cluster.
The time zone that this command sets will be used when formatting the error log that is produced by
issuing the following command:
dumperrlog
Note: If you have changed the time zone, you must clear the error log dump directory before you can
view the error log through the web application.
Issue the showtimezone command to display the current time-zone settings for the cluster. The cluster ID
and its associated time-zone are displayed. Issue the setsystemtime command to set the time for the
cluster.
An invocation example
settimezone -timezone 5
startstats
Use the startstats command to modify the interval at which per-node statistics for virtual disks
(VDisks), managed disks (MDisks), and nodes are collected.
Syntax
startstats -interval time_in_minutes
Parameters
-interval time_in_minutes
Specifies the time in minutes. This is the time interval between the gathering of statistics, from 1 to 60
minutes in increments of 1 minute.
Description
Running the startstats command resets the statistics timer to zero (0) and gives it a new interval at
which to sample. Statistics are collected at the end of each sampling period as specified by the -interval
parameter. These statistics are written to a file, with a new file created at the end of each sampling
period. Separate files are created for MDisk, VDisk, and node statistics.
A maximum of 16 files are stored in the directory at any one time for each statistics file type. Statistics
files are created for all time intervals; before the 17th file of each type is created, the oldest file of that
type is deleted. The files are named:
stats_type_stats_nodepanelname_date_time
where stats_type is Nm for MDisk statistics, Nv for VDisk statistics, and Nn for node statistics;
nodepanelname is the current configuration node panel name; date is in the format yymmdd; and time is in
the format hhmmss. For example:
Nm_stats_nodepanelname_date_time
Nv_stats_nodepanelname_date_time
Nn_stats_nodepanelname_date_time
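The statistics file-name convention can be parsed mechanically. The following Python sketch is hypothetical (the panel name used in it is made up) and simply splits a statistics file name into its parts:

```python
import re

# Nm = MDisk, Nv = VDisk, Nn = node statistics; the trailing fields are
# yymmdd and hhmmss as described above.
STATS_NAME = re.compile(
    r"^(?P<type>N[mvn])_stats_"
    r"(?P<panel>.+)_"
    r"(?P<date>\d{6})_"
    r"(?P<time>\d{6})$"
)

m = STATS_NAME.match("Nv_stats_104603_121130_100419")
print(m.group("type"), m.group("date"), m.group("time"))  # → Nv 121130 100419
```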
Statistics are collected for each MDisk and recorded in the Nm_stats_nodepanelname_date_time file,
including the following statistical information:
v The number of SCSI read and write commands that are processed during the sample period
v The number of blocks of data that are read and written during the sample period
v Per MDisk, cumulative read and write external response times in milliseconds
v Per MDisk, cumulative read and write queued response times
Statistics are collected for each VDisk and recorded in the Nv_stats_nodepanelname_date_time file,
including the following statistical information:
v The total number of processed SCSI read and write commands
v The total amount of read and written data
v Cumulative read and write response time in milliseconds
v Statistical information about the read/write cache usage
v Global Mirror statistics including latency
Statistics are collected for the node from which the statistics file originated and recorded in the
Nn_stats_nodepanelname_date_time file, including the following statistical information:
v Usage figure for the node from which the statistics file was obtained
v The amount of data transferred to and received from each port on the node to other devices on the
SAN
v Statistical information about communication to other nodes on the fabric
An invocation example
startstats -interval 25
stopstats (Deprecated)
The stopstats command has been deprecated. You can no longer disable statistics collection.
stopcluster
The stopcluster command has been discontinued. Use the stopsystem command instead.
stopsystem
Use the stopsystem command to shut down a single node or the entire clustered system in a controlled
manner. When you issue this command, you are prompted with a confirmation of intent to process the
command.
Syntax
stopsystem [-force] [-node node_name | node_id]
Parameters
-force
(Optional) Overrides the checks that this command runs. The parameter overrides the following two
checks:
v If the command results in volumes going offline, the command fails unless the force parameter is
used.
v If the node being shut down is the last online node in the I/O group, the command fails unless the
force parameter is used.
If you use the force parameter as a result of an error about volumes going offline, you force the node
to shut down, even if it is the last online node in the I/O group. Always use the force parameter
with caution.
-node node_name | node_id
(Optional) Specifies the node that you want to shut down. You can specify one of the following
values:
v The node name, or label that you assigned when you added the node to the system.
v The node ID that is assigned to the node (not the worldwide node name).
If you specify -node node_name | node_id, only the specified node is shut down; otherwise, the entire
system is shut down.
Description
If you enter this command with no parameters, the entire system is shut down. All data is flushed to disk
before the power is removed.
Entering y or Y to the confirmation message processes the command. No feedback is then displayed.
Entering anything other than y or Y results in the command not processing. No feedback is displayed.
If you need to shut down the entire system or a single node, use this command instead of using the
power button on the nodes or powering off the main power supplies to the system.
Attention: Do not power off the uninterruptible power supply or remove the power cable from the
node.
Storwize V7000: If you need to shut down the system or a single node, use this command instead of
using the power button on power supplies, or powering off the mains to the system.
Using this command to shut down a single node fails if shutting down the node makes any volumes
inaccessible, or if it is the last node in an I/O group. If you still need to shut down the node, you can use
the -force option to override these checks.
An invocation example
stopsystem
Chapter 9. Clustered system diagnostic and service-aid commands
Clustered system diagnostic and service-aid commands are designed to diagnose and find clustered
system problems.
The SAN Volume Controller enables you to perform service activity, such as problem determination and
repair activities, with a limited set of command-line tools. When you are logged in under the
administrator role, all command-line activities are permitted. When you are logged in under the service
role, only those commands that are required for service are enabled. The clustered system diagnostic and
service-aid commands apply under the service role.
applysoftware
Use the applysoftware command to upgrade the clustered system (system) to a new level of system code
(code).
Syntax
applysoftware [-force] {-file filename_arg [-prepare] | -abort}
Parameters
-force
(Optional) Specifies that the upgrade or abort should proceed even if there is a lack of redundancy in
the system. Disabling redundancy checking might cause loss of data, or loss of access to data. Use the
force parameter with the abort parameter if one or more nodes are offline.
Important: Using the force parameter might result in a loss of access. Use it only under the direction
of the IBM Support Center.
-file filename_arg
(Required for performing an upgrade) Specifies the filename of the installation upgrade package.
Copy the upgrade package onto the configuration node before running the applysoftware command.
Note: The file parameter cannot be used with the abort parameter.
-prepare
(Optional) Prepares the system for a manual code level upgrade.
Note: The abort parameter can be used with the force parameter, but not the file or prepare
parameters.
Description
This command starts the upgrade process of the system to a new level of SAN Volume Controller code.
The applysoftware command applies a level of code to the node as a service action (Paced Upgrade) to
upgrade the specific node, or as an automatic upgrade process that upgrades all of the nodes on a
system.
The applysoftware command cannot be used in service state, which means the system must be running
in order for the command to be used and be successful. This command is synchronous and therefore
reports success or failure.
The code package as specified by the file name must first be copied onto the current configuration node
in the /home/admin/upgrade directory; use the PuTTY secure copy (scp) application to copy the file.
If the applysoftware command is successful, the lssoftwareupgradestatus command reports the status as
prepared. If the applysoftware command fails, the lssoftwareupgradestatus command reports the status
as inactive.
If specified, the prepare parameter must succeed in order for the upgrade to succeed. It is recommended
that you use the same package for the prepare as for the actual upgrade. The prepare can be canceled by
using the abort parameter (even after the system is prepared) as long as the lssoftwareupgradestatus
command reports the status as prepared.
Important: The -prepare parameter might time out. If this occurs, the prepare continues asynchronously,
and the lssoftwareupgradestatus command reports the status as "preparing". Wait until
lssoftwareupgradestatus reports the status as "prepared" before proceeding with the manual upgrade
process.
The command completes as soon as the upgrade process is successful. The command fails and the
upgrade package is deleted if:
v The given package fails an integrity check due to corruption.
v Any node in the system has a hardware type not supported by the new code.
v The new code level does not support upgrades from the currently installed code.
v The code level of a remote system is incompatible with the new code.
v There are any volumes that are dependent on the status of a node.
Note: The force parameter can be used to override this if you are prepared to lose access to data
during the upgrade. Before proceeding, use the lsdependentvdisks command with the node parameter
to list the node-dependent volumes at the time the command is run. If the command returns an error,
move the quorum disks to MDisks that are accessible through all nodes. Rerun the command until no
errors are returned.
The lsdumps command allows you to view the contents of the /home/admin/upgrade directory.
An invocation example
applysoftware -abort
No feedback
caterrlog (Deprecated)
The caterrlog command has been deprecated. Use the lseventlog command instead.
caterrlogbyseqnum (Deprecated)
The caterrlogbyseqnum command has been deprecated. Use the lseventlog command instead.
cherrstate
The cherrstate command has been discontinued. Use the cheventlog command instead.
clearerrlog
Use the clearerrlog command to clear all entries from the error log including status events and any
unfixed errors.
Syntax
clearerrlog [-force]
Parameters
-force
(Optional) Specifies that the clearerrlog command be processed without confirmation requests. If the
-force parameter is not supplied, you are prompted to confirm that you want to clear the log.
Description
This command clears all entries from the error log. The entries are cleared even if there are unfixed errors
in the log. It also clears any status events that are in the log.
An invocation example
clearerrlog -force
dumperrlog
The dumperrlog command dumps the contents of the error log to a text file.
Syntax
dumperrlog [-prefix filename_prefix]
Parameters
-prefix filename_prefix
(Optional) A file name is created from the prefix and a time stamp, and has the following format:
prefix_NNNNNN_YYMMDD_HHMMSS
Note: If the -prefix parameter is not supplied, the dump is directed to a file with a system-defined
prefix of errlog.
Description
When run with no parameters, this command dumps the clustered system (system) error log to a file
using the system-supplied prefix errlog; the file name also includes the node ID and a time stamp. When
a file name prefix is provided, the same operation is performed but the details are stored in the dumps
directory within a file with a name that starts with the specified prefix.
A maximum of ten error-log dump files are kept on the system. When the 11th dump is made, the oldest
existing dump file is overwritten.
Error log dump files are written to /dumps/elogs. The contents of this directory can be viewed using the
lsdumps command.
Files are not deleted from other nodes until you issue the cleardumps command.
An invocation example
dumperrlog -prefix testerrorlog
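Because each dump name ends with an embedded YYMMDD_HHMMSS timestamp, the rotation rule (the 11th dump overwrites the oldest) can be mirrored off-box when managing collected dumps. This Python sketch is hypothetical (the node ID in the example name is made up) and picks the oldest dump by its embedded timestamp:

```python
import re

# A dump name ends with _YYMMDD_HHMMSS, e.g. errlog_104603_121130_100419.
# Lexicographic order on the (date, time) pair equals chronological order
# for two-digit years in this range, so min() finds the oldest dump.
DUMP_TS = re.compile(r"_(\d{6})_(\d{6})$")

def oldest_dump(names):
    """Return the dump file that the next (11th) dump would overwrite."""
    return min(names, key=lambda n: DUMP_TS.search(n).groups())

print(oldest_dump([
    "errlog_104603_121130_100419",
    "errlog_104603_111202_141004",
]))  # → errlog_104603_111202_141004
```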
finderr
Use the finderr command to analyze the error log for the highest severity unfixed error.
Syntax
finderr
Description
The command scans the error log for any unfixed errors. Given a priority ordering within the code, the
highest priority unfixed error is returned to standard output.
You can use this command to determine the order in which to fix the logged errors.
An invocation example
finderr
lserrlogbyfcconsistgrp (Deprecated)
The lserrlogbyfcconsistgrp command has been deprecated. Use the lseventlog command instead.
lserrlogbyfcmap (Deprecated)
The lserrlogbyfcmap command has been deprecated. Use the lseventlog command instead.
lserrlogbyhost (Deprecated)
The lserrlogbyhost command has been deprecated. Use the lseventlog command instead.
lserrlogbyiogrp (Deprecated)
The lserrlogbyiogrp command has been deprecated. Use the lseventlog command instead.
lserrlogbymdisk (Deprecated)
The lserrlogbymdisk command has been deprecated. Use the lseventlog command instead.
lserrlogbymdiskgrp (Deprecated)
The lserrlogbymdiskgrp command has been deprecated. Use the lseventlog command instead.
lserrlogbynode (Deprecated)
The lserrlogbynode command has been deprecated. Use the lseventlog command instead.
lserrlogbyrcconsistgrp (Deprecated)
The lserrlogbyrcconsistgrp command has been deprecated. Use the lseventlog command instead.
lserrlogbyrcrelationship (Deprecated)
The lserrlogbyrcrelationship command has been deprecated. Use the lseventlog command instead.
lserrlogdumps (Deprecated)
Attention: The svcinfo lserrlogdumps command is deprecated. Use the svcinfo lsdumps command to
display a list of files in a particular dumps directory.
cheventlog
Use the cheventlog command to modify events in the event log.
Syntax
cheventlog {-fix sequence_number | -checklogoff}
Parameters
-fix sequence_number
(Optional) Mark an unfixed event as fixed.
-checklogoff
(Optional) Turns off check log light emitting diode (LED).
Description
This command modifies events in the event log: the -fix parameter marks an unfixed event as fixed, and
the -checklogoff parameter turns off the check log LED.
lseventlog
Use the lseventlog command to display a concise view of the system event log, or a detailed view of one
entry from the log.
Syntax
lseventlog [-alert yes|no] [-message yes|no] [-monitoring yes|no]
[-expired yes|no] [-fixed yes|no] [-count entry_limit]
[-order date|severity] [sequence_number]
Parameters
-alert yes|no
(Optional) Includes (or excludes) events with alert status.
-message yes|no
(Optional) Includes (or excludes) events with message status.
-monitoring yes|no
(Optional) Includes (or excludes) events with monitoring status.
-expired yes|no
(Optional) Includes (or excludes) events with expired status.
-fixed yes|no
(Optional) Includes (or excludes) events with fixed status.
-count entry_limit
(Optional) Indicates the maximum number of events to display.
-order date|severity
(Optional) Indicates what order the events should be in. Ordering by date displays the oldest events
first. Ordering by severity displays the events with the highest severity first. If multiple events have
the same severity, then they are ordered by date, with the oldest event being displayed first.
The following list shows the order of severity, starting with the most severe:
1. Unfixed alerts (sorted by error code; the lowest error code has the highest severity)
2. Unfixed messages
3. Monitoring events (sorted by error code; the lowest error code has the highest severity)
4. Expired events
5. Fixed alerts and messages
sequence_number
(Optional) Specifies the sequence number of an event; when provided, the command displays the full
(detailed) view of that event.
Description
This command displays a concise view of the system event log, or a detailed view of one entry from the
log. You can sort the events and entries by severity or age.
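The severity ordering described above can be expressed as a sort key. The following Python sketch is an interpretation of the documented tiers, not IBM code; the field names follow the lseventlog output attributes:

```python
def severity_key(ev):
    """Sort key approximating lseventlog -order severity: unfixed alerts,
    unfixed messages, monitoring, expired, then fixed events; ties break
    on error code (lowest first), then on date (oldest first)."""
    tiers = {
        ("alert", "no"): 0,    # unfixed alerts
        ("message", "no"): 1,  # unfixed messages
    }
    tier = tiers.get((ev["status"], ev["fixed"]))
    if tier is None:
        tier = {"monitoring": 2, "expired": 3}.get(ev["status"], 4)
    # error_code is a 4-digit string, so string comparison orders it;
    # blank codes sort last within a tier.
    return (tier, ev.get("error_code") or "9999", ev["first_timestamp"])

events = [
    {"status": "expired", "fixed": "no", "error_code": "",
     "first_timestamp": "111130100419"},
    {"status": "alert", "fixed": "no", "error_code": "1060",
     "first_timestamp": "111202141004"},
]
events.sort(key=severity_key)
print(events[0]["status"])  # → alert
```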
Table 20 provides the attribute values that can be displayed as output view data.
Table 20. lseventlog output
machine_type
Node machine type and model number. Value: alphanumeric string (up to 7 characters long).
serial_number
Node serial number. Value: alphanumeric string (up to 7 characters long).
sequence_number
Sequence number of the event. Value: numeric, 0-8000000.
first_timestamp
When the event was added to the log. Value: YYMMDDHHMMSS.
first_timestamp_epoch
When the event was added to the log, in seconds after the epoch. Value: numeric, 32-bit.
root_sequence_number
Sequence number of the root or causal event. If the event is directly caused by another event, the
sequence_number of the related event is shown here. Value: numeric, 1-8000000; blank if there is no
root or if the event is not directly caused by another event.
event_count
Number of reported events that have been combined into this event. Value: numeric, 32-bit.
status
Event category. Value: alert, message, monitoring, or expired.
fixed
Indicates whether the event was marked fixed (for an alert) or read (for a message). Value: yes, or no
(for events that cannot be fixed, or are not fixed).
auto_fixed
Indicates whether the event is marked fixed by the code. Value: yes, or no (for events that cannot be
fixed, or are not fixed).
notification_type
Type of event notification. Value: error, warning, informational, or none.
event_id
Event ID. Value: 6-digit numeric.
event_id_text
Description associated with the event ID, in the CLI requested language. Value: text, maximum of
200 bytes.
error_code
Error code associated with this event. Value: 4-digit numeric; blank if there is no error code.
error_code_text
Description associated with the error code. Value: text (maximum of 200 bytes); blank if there is no
error code.
An invocation example
sequence_number:last_timestamp:object_type:object_id:object_name:copy_id:status:fixed:event_id:error_code:description
400:100106132413:vdisk:2:my_vdisk:1:alert:no:060001:1865:Space Efficient Virtual Disk Copy offline due to insufficient space
401:100106140000:cluster::ldcluster-2::message:no:981001::Cluster Fabric View updated by fabric discovery
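The concise rows above are colon-delimited. The following Python sketch is hypothetical and splits one row into named fields; the field order is taken from the header row, and a maxsplit keeps any colons inside the trailing description intact:

```python
# Field order taken from the concise-view header row above.
FIELDS = ("sequence_number", "last_timestamp", "object_type", "object_id",
          "object_name", "copy_id", "status", "fixed", "event_id",
          "error_code", "description")

def parse_concise(line):
    """Split one colon-delimited concise lseventlog row into a dict."""
    return dict(zip(FIELDS, line.split(":", len(FIELDS) - 1)))

ev = parse_concise("400:100106132413:vdisk:2:my_vdisk:1:alert:no:060001:1865:"
                   "Space Efficient Virtual Disk Copy offline due to insufficient space")
print(ev["object_name"], ev["error_code"])  # → my_vdisk 1865
```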
sequence_number 120
first_timestamp 111130100419
first_timestamp_epoch 1322647459
last_timestamp 111130100419
last_timestamp_epoch 1322647459
object_type node
object_id 1
object_name node1
copy_id
reporting_node_id 1
reporting_node_name node1
root_sequence_number
event_count 1
status alert
fixed yes
auto_fixed no
notification_type error
event_id 073003
event_id_text More/Less fibre channel ports operational
error_code 1060
error_code_text Fibre Channel ports not operational
machine_type 21458F4
serial_number 75BZPMA
fru none
fixed_timestamp 111202141004
fixed_timestamp_epoch 1322835004
sense1 03 03 00 00 00 00 00 00 00 00 00 00 00 00 00 00
sense2 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
sense3 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
sense4 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
sense5 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
sense6 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
sense7 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
sense8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
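The first_timestamp and first_timestamp_epoch pair in this output is consistent with a UTC clock. The following Python helper is a hypothetical sketch that converts the YYMMDDHHMMSS form to epoch seconds under that assumption:

```python
from datetime import datetime, timezone

def to_epoch(ts: str) -> int:
    """Convert a YYMMDDHHMMSS event-log timestamp to seconds since the
    epoch, assuming the reported clock is UTC."""
    dt = datetime.strptime(ts, "%y%m%d%H%M%S").replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

# Matches the first_timestamp / first_timestamp_epoch pair shown above.
print(to_epoch("111130100419"))  # → 1322647459
```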
lssyslogserver
Use the lssyslogserver command to return a concise list or a detailed view of syslog servers that are
configured on the clustered system.
Syntax
lssyslogserver [-nohdr] [-delim delimiter] [syslog_server_name | syslog_server_id]
Parameters
-nohdr
(Optional) By default, headings are displayed for each column of data in a concise style view, and for
each item of data in a detailed style view. The -nohdr parameter suppresses the display of these
headings.
-delim delimiter
(Optional) By default in a concise view, all columns of data are space-separated. The -delim
parameter overrides this behavior. Valid input for the -delim parameter is a one-byte character, which
then separates all items of data.
syslog_server_name | syslog_server_id
(Optional) Specifies the name or ID of a syslog server. When specified, a detailed view of that syslog
server is displayed.
Description
Use this command to display a concise list or a detailed view of syslog servers that are configured on the
clustered system.
setlocale
Use the setlocale command to change the locale setting for the clustered system (system). It also
changes command output to the chosen language.
Syntax
setlocale -locale locale_id
Parameters
-locale locale_id
Specifies the locale ID. The value must be a numeric value that corresponds to the desired language,
as indicated in the Description.
Description
This command changes the language in which error messages are displayed as output from the
command-line interface. Subsequently, all error messages from the command-line tools are generated in
the chosen language. This command is run when you request a change of language (locale) and is
generally run from the web page. Issue the setlocale command to change the locale setting for the
system; all interface output is changed to the chosen language. For example, to change the language to
Japanese, type the following:
setlocale -locale 3
where 3 is the value for Japanese. The following values are supported:
v 0 US English (default)
v 1 Simplified Chinese
v 2 Traditional Chinese
v 3 Japanese
v 4 French
v 5 German
v 6 Italian
v 7 Spanish
v 8 Korean
v 9 Portuguese (Brazilian)
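When scripting against setlocale, the ID-to-language mapping above can be kept as a simple lookup table. This Python sketch is hypothetical and merely restates the documented values:

```python
# Locale IDs accepted by setlocale -locale, per the list above.
LOCALES = {
    0: "US English",
    1: "Simplified Chinese",
    2: "Traditional Chinese",
    3: "Japanese",
    4: "French",
    5: "German",
    6: "Italian",
    7: "Spanish",
    8: "Korean",
    9: "Portuguese (Brazilian)",
}

print(LOCALES[3])  # → Japanese
```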
Note: This command does not change the front panel display panel settings.
svqueryclock
Use the svqueryclock command to return the date, time, and current time-zone of the clustered system
(system).
Syntax
svqueryclock
Description
This command returns the date, time and current time-zone of the system.
An invocation example
svqueryclock
writesernum
Use the writesernum command to write the node serial number into the planar NVRAM.
Syntax
writesernum -sernum serial_number node_id | node_name
Parameters
-sernum serial_number
(Required) Specifies the serial number to write to the nonvolatile memory of the system planar.
node_id | node_name
(Required) Specifies the node where the system planar is located. The serial number is written to this
system planar. This name is not the worldwide node name (WWNN).
Description
This command writes the node serial number into the planar NVRAM and then reboots the system. You
can find the serial number at the front of the node without having to remove it from the rack. The
seven-digit alphanumeric serial number is located on a label on the front of the node. The serial number
on the label might contain a hyphen. Omit this hyphen when typing the serial number with the
writesernum command.
Note: Once you have written the serial number to the planar NVRAM, you can issue the lsnodevpd
command to verify that the number is correct. The system_serial_number field contains the serial number.
An invocation example
writesernum -sernum 1300027 node1
Chapter 10. Controller command
Use the controller command to modify the name of a storage controller.
chcontroller
Use the chcontroller command to modify the attributes of a controller.
Syntax
chcontroller [-name new_name] [-allowquorum yes|no] controller_id | controller_name
Parameters
-name new_name
(Optional) Specifies the new name to be assigned to the controller.
-allowquorum yes | no
(Optional) Specifies that the controller is allowed or is not allowed to support quorum disks. A value
of yes enables a suitable controller to support quorum disks. A value of no disables a controller from
supporting quorum disks, provided that the specified controller is not currently hosting a quorum
disk.
controller_id | controller_name
(Required) Specifies the controller to modify; use either the controller name or the controller ID.
Description
This command changes the name of the controller that is specified by the controller_id | controller_name
variable to the value that you specify with the -name parameter.
If any controller that is associated with an MDisk shows the allow_quorum attribute set to no with the
lscontroller command, the set quorum action fails for that MDisk. Before using the chcontroller
command to set the -allowquorum parameter to yes on any disk controller, check the following website to
see whether the controller supports quorum.
www.ibm.com/storage/support/2145
You can add a new disk controller system to your SAN at any time. Follow the switch zoning guidelines
in the section about switch zoning. Also, ensure that the controller is set up correctly for use with the
clustered system (system).
To add a new disk controller system to a running configuration, ensure that the system has detected the
new storage MDisks by issuing the detectmdisk command. The controller has automatically been
assigned a default name. If you are unsure of which controller is presenting the MDisks, issue the
lscontroller command to list the controllers. The new controller is listed with the highest numbered
default name. Record the controller name and follow the instructions in the section about determining a
disk controller system name.
These MDisks correspond to the RAID arrays or partitions that you have created. Record the field
controller LUN number. The field controller LUN number corresponds with the LUN number that you
assigned to each of the arrays or partitions.
Create a new storage pool and add only the RAID arrays that belong to the new controller to this
storage pool. Avoid mixing RAID types; for each set of RAID array types (for example, RAID-5 or
RAID-1), create a new storage pool. Assign each storage pool an appropriate name; if your controller is
called FAST650-abc and the storage pool contains RAID-5 arrays, assign the storage pool a name similar
to F600-abc-R5. Issue the following command:
Note: This creates a new storage pool with an extent size of 16 MB.
An invocation example
chcontroller -name newtwo 2
Chapter 11. Drive commands
Use the drive commands to capture information to assist with managing drives.
applydrivesoftware
Use the applydrivesoftware command to upgrade drives.
Syntax
applydrivesoftware -file name -type firmware|fpga -drive drive_id [-force]
Parameters
-file name
(Required) Specifies the firmware upgrade file name that exists in the /home/admin/upgrade/
directory. This must be an alphanumeric string of up to 255 characters.
-type
(Required) Specifies the type of download. This can be either firmware or fpga.
Remember: Drives using firmware can be upgraded concurrently, but this does not affect the Field
Programmable Gate Array (FPGA).
-drive drive_id
(Required) Specifies the ID of the drive to be upgraded. This must be a numeric string.
-force
(Optional) Specifies that the upgrade should continue. This disables redundancy checking. In the
unlikely event that a software installation causes the drive to fail, disabling redundancy checking
might cause loss of data, or loss of access to data. If specified no check is performed for volumes that
are dependent on this drive. This parameter is recommended for non-redundant RAID configuration
drives, but is not recommended for redundant RAID configuration drives.
Description
This command upgrades drives. Additionally, the system applies updates to the drive if there is an
update available for that drive type. The system should stop if any problems occur.
Additionally, the system checks if any volumes are dependent on the drive, and the command fails if any
are dependent. This verification is required to install software on drives that are part of non-redundant
RAID configurations. Use the -force parameter to bypass this verification.
The -force parameter is not always required for non-redundant RAID configuration drives. For example,
if the only volumes on these drives are mirrored volumes, an attempt can be made without the -force
parameter. (Without -force, the command does not start the download if there are dependent volumes.)
If the download does not start because there are dependent volumes, specify lsdependentvdisks -drive
drive_id for the drive being upgraded to find out which volumes are dependent on it.
An invocation example
No feedback
chdrive
Use the chdrive command to change the drive properties.
Syntax
chdrive {-use unused|candidate|spare|failed [-allowdegraded] | -task format|certify|recover} drive_id
Parameters
-use
Describes the role of the drive:
v unused: the drive is not in use and will not be used as a spare
v candidate: the drive is available for use in an array
v spare: the drive can be used as a hot spare if required
v failed: the drive has failed.
Note: To create member drives, add the drives to arrays using the charray command.
-allowdegraded
(Optional) Permits a change of drive use to continue, even if a hot spare is not available.
-task
Causes the drive to perform a task:
v format: a drive is formatted for use in an array; only permitted when drive is a candidate or has
failed validation
v certify: the disk is analyzed to verify the integrity of the data it contains; permitted for any drive
that is a candidate, spare, or member
v recover: recover an offline SSD drive without losing data; permitted when the drive is offline
because a build is required, or when the drive has failed validation
Note: You can track the drive progress using the lsdriveprogress command.
drive_id
The identity of the drive.
Description
lsdrive
Use the lsdrive command to display configuration information and drive VPD.
Syntax
lsdrive [-bytes] [drive_id]
Parameters
-bytes
(Optional) The size (capacity) of the drive in bytes.
drive_id
(Optional) The identity of the drive.
Description
firmware_level:3.02
FPGA_level:1.99
mdisk_id:0
mdisk_name:mdisk0
member_id:0
enclosure_id:1
slot:2
node_id:
node_name:
quorum_id:
port_1_status:online
port_2_status:online
lsdrivelba
Use the lsdrivelba command to map array MDisk logical block address (LBA) to a set of drives.
Syntax
lsdrivelba [-delim delimiter] -mdisklba lba -mdisk mdisk_id | mdisk_name
Parameters
-delim delimiter
(Optional) By default in a concise view, all columns of data are space-separated. The width of each
column is set to the maximum possible width of each item of data. In a detailed view, each item of
data has its own row, and if the headers are displayed, the data is separated from the header by a
space. The -delim parameter overrides this behavior. Valid input for the -delim parameter is a
one-byte character. If you enter -delim : on the command line, the colon character (:) separates all
items of data in a concise view; for example, the spacing of columns does not occur. In a detailed
view, the data is separated from its header by the specified delimiter.
-mdisklba lba
(Optional) The logical block address (LBA) on the MDisk. The LBA must be specified in hex, with a
0x prefix.
-mdiskmdisk_id | mdisk_name
(Optional) The ID or name of the MDisk.
Description
This command maps the array MDisk logical block address (LBA) to a set of drives.
This is an example of a five-member RAID-5 array with a strip size of 256 KB:
An invocation example
lsdrivelba -delim : -mdisklba 0x000 -mdisk 2
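Because -delim replaces the variable-width columns with a single separator character, the concise view is easy to process in scripts. A sketch follows; the field names and values are placeholders for illustration, not output from a live system:

```shell
# Illustrative concise output from 'lsdrivelba -delim :'; the header and
# row below are placeholders, not from a live system.
out='drive_id:type:drive_lba
2:allocated:0x0000000000000940'

# With -delim :, fields split cleanly on the colon; pull the drive_id
# from the data row (skipping the header line).
drive=$(printf '%s\n' "$out" | awk -F: 'NR>1 {print $1}')
echo "$drive"
```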
lsdriveprogress
Use the lsdriveprogress command to view the progress of various drive tasks.
Syntax
lsdriveprogress [-delim delimiter] [drive_id]
Parameters
-delim delimiter
(Optional) By default in a concise view, all columns of data are space-separated. The width of each
column is set to the maximum possible width of each item of data. In a detailed view, each item of
data has its own row, and if the headers are displayed, the data is separated from the header by a
space. The -delim parameter overrides this behavior. Valid input for the -delim parameter is a
one-byte character. If you enter -delim : on the command line, the colon character (:) separates all
items of data in a concise view; for example, the spacing of columns does not occur. In a detailed
view, the data is separated from its header by the specified delimiter.
drive_id
(Optional) The drive for which you want to view progress.
Description
v format
v certify
v recover
progress
The percentage complete of the job.
estimated_completion_time
The estimated completion time (YYMMDDHHMMSS), where:
v 'Y' is year
v 'M' is month
v 'D' is day
v 'H' is hour
v 'M' is minute
v 'S' is second
An invocation example
lsdriveprogress -delim :
An invocation example
lsdriveprogress -delim : 9
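The estimated_completion_time field can be split positionally in a script. A sketch, using an illustrative YYMMDDHHMMSS value:

```shell
# Illustrative estimated_completion_time value in YYMMDDHHMMSS form
t="121031142005"
yy=$(echo "$t" | cut -c1-2)   # year
mo=$(echo "$t" | cut -c3-4)   # month
dd=$(echo "$t" | cut -c5-6)   # day
hh=$(echo "$t" | cut -c7-8)   # hour
mi=$(echo "$t" | cut -c9-10)  # minute
ss=$(echo "$t" | cut -c11-12) # second
when="20$yy-$mo-$dd $hh:$mi:$ss"
echo "$when"
```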
triggerdrivedump
Use the triggerdrivedump command to collect support data from a disk drive. This data can help to
understand problems with the drive, and does not contain any data that applications may have written to
the drive.
Syntax
triggerdrivedump drive_id
Parameters
drive_id
The ID of the drive to dump.
Description
Use this command to collect internal log data from a drive and store the information in a file in the
/dumps/drive directory. This directory is on one of the nodes connected to the drive.
An invocation example
triggerdrivedump 1
Note: The system chooses the node on which to run the statesave.
Chapter 12. Email and event notification commands
You can use the command-line interface (CLI) to enable your system to send notifications.
chemail
Use the chemail command to set or modify contact information for email event notifications. At least one
of the parameters must be specified to modify settings.
Syntax
chemail [-reply reply_email_address] [-contact contact_name]
[-primary primary_telephone_number] [-alternate alternate_telephone_number]
[-location location] [-contact2 contact_name2]
[-primary2 primary_telephone_number2] [-alternate2 alternate_telephone_number2]
[-nocontact2] [-organization organization] [-address address]
[-city city] [-state state] [-zip zip] [-country country]
Parameters
-reply reply_email_address
(Optional) Specifies the email address to which a reply is sent.
-contact contact_name
(Optional) Specifies the name of the person to receive the email.
For machine types 2071 and 2072 the maximum number of characters is 30. For other machine types
the maximum number of characters is 72.
-primary primary_telephone_number
(Optional) Specifies the primary contact telephone number.
Note: For machine types 2071 and 2072 (in the United States and Canada), the value entered must be
exactly ten decimal digits. For machine types 2071 and 2072 (in other countries) the value entered
can be five to nineteen decimal digits. Otherwise, there can be up to nineteen characters.
-alternate alternate_telephone_number
(Optional) Specifies the alternate contact telephone number that is used when you cannot reach the
primary contact on the primary phone.
-location location
(Optional) Specifies the physical location of the system that is reporting the error. The location value
must not contain punctuation or any other characters that are not alphanumeric or spaces.
-contact2 contact_name2
(Optional) Specifies the name of the second contact person.
-primary2 primary_telephone_number2
(Optional) Specifies the primary contact telephone number for the second contact person.
Note: For machine types 2071 and 2072 (in the United States and Canada), the value entered must be
exactly ten decimal digits. For machine types 2071 and 2072 (in other countries) the value entered
can be five to nineteen decimal digits. Otherwise, there can be up to nineteen characters.
-alternate2 alternate_telephone_number2
(Optional) Specifies the alternate contact telephone number for the second contact person.
-nocontact2
(Optional) Removes all the contact details for the second contact person.
-organization organization
(Optional) Specifies the user's organization as it should appear in Call Home emails.
-address address
(Optional) Specifies the first line of the user's address as it should appear in Call Home email.
-city city
(Optional) Specifies the user's city as it should appear in Call Home email.
-state state
(Optional) Specifies the user's state as it should appear in Call Home email. This is a two-character
value such as NY for New York.
-zip zip
(Optional) Specifies the user's zip code or postal code as it should appear in Call Home email.
-country country
(Optional) Specifies the country in which the machine resides as it should appear in Call Home
email. This is a two-character value such as US for United States.
For machine types 2071 and 2072 this value cannot be US or CA if the value for primary or primary2
telephone number is not blank or exactly 10 digits.
Description
This command sets or modifies contact information that is used by the email event notification facility.
Note: If you are starting the email event notification facility, the reply, contact, primary, and location
parameters are required. If you are modifying contact information used by the email event notification
facility, at least one of the parameters must be specified.
These fields do not have to be set to start the email notification system, but if the new fields are set they
are included in the email event notifications.
An invocation example
chemail -reply [email protected]
-contact "Didier Drogba"
-primary 01962817668
-location "C block"
-organization UEFA
-address "1 Chelsea Blvd"
-city Fulham
-zip 0U812
-country GB
An invocation example
chemail -primary 0441234567 -location "room 256 floor 1 IBM"
An invocation example
chemail -country US -primary 8458765309
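A sketch of preparing a chemail invocation in a script: it applies the ten-digit United States/Canada rule described above and quotes multi-word values. The contact values are placeholders, and the command line is only echoed, not run against a system:

```shell
# Placeholder contact values for a 2071/2072 machine in the United States;
# nothing is sent to a system, the command line is only echoed.
primary="8458765309"
location="room 256 floor 1 IBM"

# US/Canada rule from above: the primary number must be exactly ten
# decimal digits.
if echo "$primary" | grep -Eq '^[0-9]{10}$'; then valid=yes; else valid=no; fi

# Multi-word values such as -location must be quoted on the command line.
cmd="chemail -country US -primary $primary -location \"$location\""
echo "$valid"
echo "$cmd"
```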
chemailserver
Use the chemailserver command to modify the parameters of an existing email server object.
Syntax
chemailserver [-name server_name] [-ip ip_address] [-port port]
email_server_name | email_server_id
Parameters
-name server_name
(Optional) Specifies a unique name to assign to the email server object. The name must be a 1-
through 63-character string, and cannot start with a hyphen or number. When specifying a server
name, emailserver is a reserved word.
-ip ip_address
(Optional) Specifies the IP address of the email server object. This must be a valid IPv4 or IPv6
address. IPv6 addresses can be zero compressed.
-port port
(Optional) Specifies the port number for the email server. This must be a value of 0 - 65535. The
default value is 25.
email_server_name | email_server_id
(Required) Specifies the name or ID of the server object to be modified.
Description
Use this command to change the settings of an existing email server object. The email server object
describes a remote Simple Mail Transfer Protocol (SMTP) email server.
You must specify either the current name or the ID of the object returned at creation time. Use the
lsemailserver command to obtain this ID.
An invocation example
chemailserver -name newserver 0
chemailuser
Use the chemailuser command to modify the settings that are defined for an email recipient.
Syntax
chemailuser [-address user_address] [-usertype support | local]
[-error on | off] [-warning on | off] [-info on | off]
[-name user_name] [-inventory on | off] userid_or_name
Parameters
-address user_address
(Optional) Specifies the email address of the person receiving the email or inventory notifications, or
both. The user_address value must be unique.
-usertype support | local
(Optional) Specifies the type of user, either local or support, based on the following definitions:
support
Address of the support organization that provides vendor support.
local All other addresses.
-error on | off
(Optional) Specifies whether the recipient receives error-type event notifications. Set to on, error-type
event notifications are sent to the email recipient. Set to off, error-type event notifications are not sent
to the recipient.
-warning on | off
(Optional) Specifies whether the recipient receives warning-type event notifications. Set to on,
warning-type event notifications are sent to the email recipient. Set to off, warning-type event
notifications are not sent to the recipient.
-info on | off
(Optional) Specifies whether the recipient receives informational event notifications. Set to on,
informational event notifications are sent to the email recipient. Set to off, informational event
notifications are not sent to the recipient.
-name user_name
(Optional) Specifies the user name of the new email event notification recipient. The user_name value
must be unique, must not contain spaces, and must not consist only of numbers. The name emailusern,
where n is a number, is reserved and cannot be specified as one of your user names.
-inventory on | off
(Optional) Specifies whether this recipient receives inventory email notifications.
userid_or_name
(Required) Specifies the email recipient for whom you are modifying settings.
Description
This command modifies the settings that are established for an email recipient. Standard rules regarding
names apply; therefore, it is not possible to change a name to emailusern, where n is a number.
Note: Before the usertype parameter can be set to support, the -warning and -info flags must be set to
off.
An invocation example
The following example modifies email settings for email recipient manager2008:
chemailuser -usertype local manager2008
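The naming rules above can be checked in a script before calling chemailuser or mkemailuser: no spaces, not all digits, and emailuserN is reserved. A sketch, with illustrative names:

```shell
# Sketch of the recipient-name rules: reject names containing spaces,
# names that are all digits, and the reserved emailuserN pattern.
check_name() {
  case "$1" in
    *' '*) echo invalid; return ;;
  esac
  if echo "$1" | grep -Eq '^[0-9]+$|^emailuser[0-9]+$'; then
    echo invalid
  else
    echo valid
  fi
}
a=$(check_name "manager2008")  # acceptable name
b=$(check_name "emailuser3")   # reserved, rejected
echo "$a $b"
```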
chsnmpserver
Use the chsnmpserver command to modify the parameters of an existing SNMP server.
Syntax
chsnmpserver [-name server_name] [-ip ip_address] [-community community]
[-error on | off] [-warning on | off] [-info on | off] [-port port]
snmp_server_name | snmp_server_id
Parameters
-name server_name
(Optional) Specifies a name to assign to the SNMP server. The name must be unique. When
specifying a server name, snmp is a reserved word.
-ip ip_address
(Optional) Specifies an IP address to assign to the SNMP server. This must be a valid IPv4 or IPv6
address.
-community community
(Optional) Specifies the community name for the SNMP server.
-error on | off
(Optional) Specifies whether the server receives error notifications. Set to on, error notifications are
sent to the SNMP server. Set to off, error notifications are not sent to the SNMP server.
-warning on | off
(Optional) Specifies whether the server receives warning notifications. Set to on, warning notifications
are sent to the SNMP server. Set to off, warning notifications are not sent to the SNMP server.
-info on | off
(Optional) Specifies whether the server receives information notifications. Set to on, information
notifications are sent to the SNMP server. Set to off, information notifications are not sent to the
SNMP server.
-port port
(Optional) Specifies the remote port number for the SNMP server. This must be a value of 1 - 65535.
snmp_server_name | snmp_server_id
(Required) Specifies the name or ID of the server to be modified.
Description
Use this command to change the settings of an existing SNMP server. You must specify either the current
name of the server or the ID returned at creation time. Use the lssnmpserver command to obtain this ID.
An invocation example
chsnmpserver -name newserver 0
chsyslogserver
Use the chsyslogserver command to modify the parameters of an existing syslog server.
Syntax
chsyslogserver [-name server_name] [-ip ip_address] [-facility facility]
[-error on | off] [-warning on | off] [-info on | off]
syslog_server_name | syslog_server_id
Parameters
-name server_name
(Optional) Specifies a name to assign to the syslog server. The name must be unique. When
specifying a server name, syslog is a reserved word.
-ip ip_address
(Optional) Specifies an IP address to assign to the syslog server. This must be a valid IPv4 or IPv6
address.
-facility facility
(Optional) Specifies a facility number to identify the origin of the message to the receiving server.
Servers configured with facility values of 0 - 3 receive syslog messages in concise format. Servers
configured with facility values of 4 - 7 receive syslog messages in fully-expanded format.
-error on | off
(Optional) Specifies whether the server receives error notifications. Set to on, error notifications are
sent to the syslog server. Set to off, error notifications are not sent to the syslog server.
-warning on | off
(Optional) Specifies whether the server receives warning notifications. Set to on, warning notifications
are sent to the syslog server. Set to off, warning notifications are not sent to the syslog server.
-info on | off
(Optional) Specifies whether the server receives information notifications. Set to on, information
notifications are sent to the syslog server. Set to off, information notifications are not sent to the
syslog server.
syslog_server_name | syslog_server_id
(Required) Specifies the name or ID of the server to be modified.
Description
Use this command to change the settings of an existing syslog server. You must specify either the current
name of the server or the ID returned at creation time. Use the lssyslogserver command to obtain this
ID.
An invocation example
chsyslogserver -facility 5 2
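The example above sets facility 5, which selects the fully expanded message format. A sketch of the 0 - 3 (concise) versus 4 - 7 (fully expanded) rule:

```shell
# Facility rule from above: values 0-3 select the concise message
# format, values 4-7 the fully expanded format. 5 matches the example.
facility=5
if [ "$facility" -le 3 ]; then fmt=concise; else fmt=full_expanded; fi
echo "$fmt"
```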
mkemailserver
Use the mkemailserver command to create an email server object that describes a remote Simple Mail
Transfer Protocol (SMTP) email server.
Syntax
mkemailserver -ip ip_address [-name server_name] [-port port]
Parameters
-name server_name
(Optional) Specifies a unique name to assign to the email server object. The name must be a 1-
through 63-character string, and cannot start with a hyphen or number. If a name is not specified,
then a system default of emailservern is applied, where n is the object ID. When specifying a server
name, emailserver is a reserved word.
-ip ip_address
(Required) Specifies the IP address of a remote email server. This must be a valid IPv4 or IPv6
address. IPv6 addresses can be zero compressed.
-port port
(Optional) Specifies the port number for the email server. This must be a value of 1 - 65535. The
default value is 25.
Description
This command creates an email server object that represents the SMTP server. The SAN Volume
Controller uses the email server to send event notification and inventory emails to email users. It can
transmit any combination of error, warning, and informational notification types.
The SAN Volume Controller supports up to six email servers to provide redundant access to the external
email network. The email servers are used in turn until the email is successfully sent from the SAN
Volume Controller. The attempt is successful when the SAN Volume Controller gets a positive
acknowledgement from an email server that the email has been received by the server.
An invocation example
mkemailserver -ip 2.2.2.2 -port 78
mkemailuser
Use the mkemailuser command to add a recipient of email event and inventory notifications to the email
event notification facility. Add up to twelve recipients (one recipient at a time).
Syntax
mkemailuser -address user_address -usertype support | local
[-name user_name] [-error on | off] [-warning on | off]
[-info on | off] [-inventory on | off]
Parameters
-name user_name
(Optional) Specifies the name of the person who is the recipient of email event notifications. The
user_name value must be unique, must not contain spaces, and must not contain only numbers. If you
do not specify a user name, the system automatically assigns a user name in the format of
emailusern, where n is a number beginning with 0 (emailuser0, emailuser1, and so on).
The name emailusern, where n is a number, is reserved and cannot be used as one of your user
names.
-address user_address
(Required) Specifies the email address of the person receiving the email event or inventory
notifications, or both. The user_address value must be unique.
-usertype support | local
(Required) Specifies the type of user, either support or local, based on the following definitions:
support
Address of the support organization that provides vendor support.
local All other addresses.
-error on | off
(Optional) Specifies whether the recipient receives error-type event notifications. Set to on, error-type
event notifications are sent to the email recipient. Set to off, error-type event notifications are not sent
to the recipient. The default value is on.
-warning on | off
(Optional) Specifies whether the recipient receives warning-type event notifications. Set to on,
warning-type event notifications are sent to the email recipient. Set to off, warning-type event
notifications are not sent to the recipient. The default value is on.
-info on | off
(Optional) Specifies whether the recipient receives informational event notifications. Set to on,
informational event notifications are sent to the email recipient. Set to off, informational event
notifications are not sent to the recipient. The default value is on.
-inventory on | off
(Optional) Specifies whether this recipient receives inventory email notifications. The default value is
off.
Description
This command adds email recipients to the email event and inventory notification facility. You can add
up to twelve recipients, one recipient at a time. When an email user is added, if a user name is not
specified, a default name is allocated by the system. This default name has the form of emailuser1,
emailuser2, and so on. Email notification starts when you process the startemail command.
Note: Before you can set the usertype parameter to support, turn the -warning and -info flags off.
An invocation example
mkemailuser -address [email protected] -error on -usertype local
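Because recipients are added one at a time, scripted setup is a simple loop. A sketch, with placeholder addresses; the commands are echoed rather than run against a system:

```shell
# Hypothetical recipient addresses (the facility accepts up to twelve,
# added one at a time). Commands are echoed, not executed.
count=0
for addr in ops@example.com oncall@example.com; do
  echo "mkemailuser -address $addr -usertype local"
  count=$((count + 1))
done
echo "$count"
```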
mksnmpserver
Use the mksnmpserver command to create a Simple Network Management Protocol (SNMP) server to receive notifications.
Syntax
mksnmpserver -ip ip_address [-name server_name] [-community community]
[-error on | off] [-warning on | off] [-info on | off] [-port port]
Parameters
-name server_name
(Optional) Specifies a unique name to assign to the SNMP server. If a name is not specified, then a
system default of snmpn is applied, where n is the ID of the server. When specifying a server name,
snmp is a reserved word.
-ip ip_address
(Required) Specifies the IP address of the SNMP server. This must be a valid IPv4 or IPv6 address.
-community community
(Optional) Specifies the community name for the SNMP server. If you do not specify a community
name, then the default name of public is used.
-error on | off
(Optional) Specifies whether the server receives error notifications. Set to on, error notifications are
sent to the SNMP server. Set to off, error notifications are not sent to the SNMP server. The default
value is on.
-warning on | off
(Optional) Specifies whether the server receives warning notifications. Set to on, warning notifications
are sent to the SNMP server. Set to off, warning notifications are not sent to the SNMP server. The
default value is on.
-info on | off
(Optional) Specifies whether the server receives information notifications. Set to on, information
notifications are sent to the SNMP server. Set to off, information notifications are not sent to the
SNMP server. The default value is on.
-port port
(Optional) Specifies the remote port number for the SNMP server. This must be a value of 1 - 65535.
The default value is 162.
Description
An invocation example
mksnmpserver -ip 2.2.2.2 -port 78
The resulting output
SNMP Server id [2] successfully created
mksyslogserver
Use the mksyslogserver command to create a syslog server to receive notifications.
Syntax
mksyslogserver -ip ip_address [-name server_name] [-facility facility]
[-error on | off] [-warning on | off] [-info on | off]
Parameters
-name server_name
(Optional) Specifies a unique name to assign to the syslog server. If a name is not specified, then a
system default of syslogn is applied, where n is the ID of the server. When specifying a server name,
syslog is a reserved word.
-ip ip_address
(Required) Specifies the Internet Protocol (IP) address of the syslog server. This must be a valid
Internet Protocol Version 4 (IPv4) or Internet Protocol Version 6 (IPv6) address.
-facility facility
(Optional) Specifies the facility number used in syslog messages. This number identifies the origin of
the message to the receiving server. Servers configured with facility values of 0 - 3 receive syslog
messages in concise format. Servers configured with facility values of 4 - 7 receive syslog messages in
fully-expanded format. The default value is 0.
-error on | off
(Optional) Specifies whether the server receives error notifications. Set to on, error notifications are
sent to the syslog server. Set to off, error notifications are not sent to the syslog server. The default
value is on.
-warning on | off
(Optional) Specifies whether the server receives warning notifications. Set to on, warning notifications
are sent to the syslog server. Set to off, warning notifications are not sent to the syslog server. The
default value is on.
-info on | off
(Optional) Specifies whether the server receives information notifications. Set to on, information
notifications are sent to the syslog server. Set to off, information notifications are not sent to the
syslog server. The default value is on.
Description
This command creates a syslog server to receive notifications. The syslog protocol is a client-server
standard for forwarding log messages from a sender to a receiver on an IP network. Syslog can be used
to integrate log messages from different types of systems into a central repository.
An invocation example
mksyslogserver -ip 1.2.3.4
rmemailserver
Use the rmemailserver command to delete the specified email server object.
Syntax
rmemailserver email_server_name | email_server_id
Parameters
email_server_name | email_server_id
(Required) Specifies the name or ID of the email server object to be deleted.
Description
Use this command to delete an existing email server object that describes a remote Simple Mail Transfer
Protocol (SMTP) email server. You must specify either the current name or the ID of the object returned at
creation time. Use the lsemailserver command to obtain this ID.
Note: Email service stops when the last email server is removed. Use the startemail command to
reactivate the email and inventory notification function after at least one email server has been
configured.
An invocation example
rmemailserver email4
rmemailuser
Use the rmemailuser command to remove a previously defined email recipient from the system.
Syntax
rmemailuser userid_or_name
Parameters
userid_or_name
(Required) Specifies the user ID or user name of the email recipient to remove.
Description
This command removes a previously defined email recipient from the system.
An invocation example
rmemailuser manager2008
rmsnmpserver
Use the rmsnmpserver command to delete the specified Simple Network Management Protocol (SNMP)
server.
Syntax
rmsnmpserver snmp_server_name | snmp_server_id
Parameters
snmp_server_name | snmp_server_id
(Required) Specifies the name or ID of the SNMP server to be deleted.
Description
Use this command to delete an existing SNMP server. You must specify either the current name of the
server or the ID returned at creation time. Use the lssnmpserver command to obtain this ID.
An invocation example
rmsnmpserver snmp4
rmsyslogserver
Use the rmsyslogserver command to delete the specified syslog server.
Syntax
rmsyslogserver syslog_server_name | syslog_server_id
Parameters
syslog_server_name | syslog_server_id
(Required) Specifies the name or ID of the syslog server to be deleted.
Description
Use this command to delete an existing syslog server. You must specify either the current name of the
server or the ID returned at creation time. Use the lssyslogserver command to obtain this ID.
An invocation example
rmsyslogserver 2
sendinventoryemail
Use the sendinventoryemail command to send an inventory email notification to all email recipients able
to receive inventory email notifications. There are no parameters for this command.
Syntax
sendinventoryemail
Parameters
Description
This command sends an inventory email notification to all email recipients who are enabled to receive
inventory email notifications. This command fails if the startemail command has not been processed and
at least one email recipient using the email event and inventory notification facility has not been set up to
receive inventory email notifications. This command also fails if the email infrastructure has not been set
up.
An invocation example
In the following example, you send an inventory email notification to all email recipients who are
enabled to receive them:
sendinventoryemail
startemail
Use the startemail command to activate the email and inventory notification function. There are no
parameters for this command.
Syntax
startemail
Parameters
Description
This command enables the email event notification service. No emails are sent to users until the
startemail command has been run and at least one user has been defined to the system.
An invocation example
In the following example, you are starting the email error notification service.
startemail
stopemail
Use the stopemail command to stop the email and inventory notification function. There are no
parameters for this command.
Syntax
stopemail
Parameters
Description
This command stops the email error notification function. No emails are sent to users until the startemail
command is reissued.
An invocation example
In the following example, you have stopped the email and inventory notification function:
stopemail
testemail
Use the testemail command to send an email notification to one or all of the configured email recipients to test whether email event notifications are working correctly.
Syntax
testemail userid_or_name | -all
Parameters
userid_or_name
(Required if you do not specify -all) Specifies the user ID or user name of the email recipient that you
want to send a test email to. You cannot use this parameter with the -all parameter. The
userid_or_name value must not contain spaces.
-all
(Required if you do not specify userid_or_name) Sends a test email to all email users configured to
receive notification of events of any notification type. No attempt is made to send the test email to an
email user who does not have any notification setting set to on.
Description
This command sends test emails to the specified email users. The email recipient expects to receive the
test email within a specified service time. If the email is not received within the expected time period, the
recipient must contact the administrator to ensure that the email settings for the user are correct. If there
is still a problem, you must contact the IBM Support Center.
The email recipient uses the test email to check that the Simple Mail Transfer Protocol (SMTP) name, the
IP address, the SMTP port, and the user address are valid.
An invocation example
testemail -all
Chapter 13. Enclosure commands
Storwize V7000, Flex System V7000 Storage Node, Storwize V3500, and Storwize V3700 only: Enclosure
commands capture information that can assist you with managing enclosures.
addcontrolenclosure
Use the addcontrolenclosure command to add control enclosures to the clustered system.
Syntax
addcontrolenclosure -iogrp io_grp_id_or_name -sernum enclosure_serial_number
Parameters
-iogrp io_grp_id_or_name
The I/O group in which you want to put the control enclosure.
-sernum enclosure_serial_number
The serial number of the control enclosure you want to add.
Description
An invocation example
addcontrolenclosure -iogrp 0 -sernum 2361443
chenclosure
Use the chenclosure command to modify enclosure properties.
Syntax
chenclosure {-identify yes|no | -managed yes|no | -id enclosure_id} enclosure_id
Parameters
Note: Optional parameters are mutually exclusive. Exactly one of the optional parameters must be set.
-identify yes|no
(Optional) Causes the identify LED start or stop flashing.
-managed yes|no
(Optional) Changes the enclosure to a managed or unmanaged enclosure.
Description
chenclosurecanister
Use the chenclosurecanister command to modify the properties of an enclosure canister.
Syntax
chenclosurecanister {-excludesasport yes|no -port 1|2 [-force] | -identify yes|no}
canister_id enclosure_id
Note:
1. The -port and -excludesasport parameters must be specified together.
2. Exactly one of the optional parameters must be set.
Parameters
Note: Using the -force flag might result in loss of access to your data.
-port 1 | 2
(Optional) The SAS port to include or exclude.
canister_id
The canister you want to apply the change to.
enclosure_id
The enclosure in which the canister is a member.
Description
Results
No feedback
chenclosureslot
Use the chenclosureslot command to modify the properties of an enclosure slot.
Syntax
chenclosureslot -slot slot_id
-identify yes|no
-exclude yes|no -port port_id -force
enclosure_id
Note:
1. Optional parameters are mutually exclusive.
2. You can only specify the port parameter or the -force parameter when you also specify the -exclude
parameter.
3. Exactly one of the optional parameters must be set.
4. The -force flag only has an effect with -exclude yes.
Parameters
-identify yes|no
(Optional) Changes the state of the fault light-emitting diode (LED) to or from slow_flashing.
-exclude yes|no
(Optional) Ensures that an enclosure slot port is excluded. The following list gives details of the
options you can use with this parameter:
v -exclude yes -port port_id -slot slot_id enclosure_id: The port you specify with port_id will be
excluded. If the current state of the port is excluded_by_enclosure, excluded_by_drive, or
excluded_by_cluster, this command will appear to have no effect. However, if the current state of
the port is online, then that state will change to excluded_by_cluster. The port will remain
excluded until you rerun this command with -exclude no.
Attention: This command checks for dependent volumes. If issuing this command would result in
losing access to data, then the command fails and an error message is displayed. You can use the
-force flag to ignore these errors, but this could result in loss of access to data.
Important: Using the force parameter might result in a loss of access. Use it only under the direction of
the IBM Support Center.
Description
The results:
No feedback
lsenclosure
Use the lsenclosure command to view a summary of the enclosures.
Syntax
lsenclosure [-delim delimiter] [enclosure_id]
Parameters
enclosure_id
Detailed information for the enclosure that you specify.
-delim delimiter
(Optional) By default in a concise view, all columns of data are space-separated. The width of each
column is set to the maximum possible width of each item of data. In a detailed view, each item of
data has its own row, and if the headers are displayed, the data is separated from the header by a
space. The -delim parameter overrides this behavior. Valid input for the -delim parameter is a
one-byte character. If you enter -delim : on the command line, the colon character (:) separates all
items of data in a concise view; for example, the spacing of columns does not occur. In a detailed
view, the data is separated from its header by the specified delimiter.
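As with the other list commands, the -delim output is convenient for scripting. A sketch that filters illustrative lsenclosure output; the values are placeholders, not from a live system:

```shell
# Illustrative concise 'lsenclosure -delim :' output; placeholder values.
out='id:status:type:managed
1:online:control:yes
2:degraded:expansion:yes'

# IDs of enclosures that are not fully online
bad=$(printf '%s\n' "$out" | awk -F: 'NR>1 && $2!="online" {print $1}')
echo "$bad"
```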
Description
This command enables you to view a summary of the enclosures (including current status information for
canisters and power and cooling units, and other enclosure attributes). Table 23 shows the possible
outputs:
Table 23. lsenclosure output
Attribute Description
id The ID of the enclosure.
status Indicates whether an enclosure is visible to the SAS network:
v online: a managed or unmanaged enclosure is visible.
v offline: a managed enclosure is not visible, and other fields hold their last known
values.
v degraded: an enclosure is visible, but not through both strands.
type The type of enclosure:
v control
v expansion
managed Whether the enclosure is managed:
v yes
v no
IO_group_id The I/O group the enclosure belongs to; blank if canisters are connected to two
different I/O groups.
IO_group_name The I/O group the enclosure belongs to; blank if canisters are connected to two
different I/O groups.
fault_LED The status of the fault light-emitting diode (LED) on the enclosure:
v on: a service action is required immediately on the enclosure or a component within
the enclosure (including a canister, power unit, or non-spared drive).
v slow_flashing: there is insufficient battery power to run I/O.
v off: there are no faults on the enclosure or its components.
identify_LED The state of the identify LED:
v off: the enclosure is not identified
v slow_flashing: the enclosure is being identified
error_sequence_number Indicates the error log number of the highest priority error for this object. This is
typically blank; however, if there is a problem (for example, the status has degraded),
then it contains the sequence number of that error.
product_MTM The product machine type and model.
serial_number The serial number of the enclosure. This is the product serial number, which indicates
the enclosure and its contents. The enclosure has its own serial number, which is
embedded in the FRU_identity 11S data.
FRU_part_number The FRU part number of the enclosure.
FRU_identity The 11S serial number that combines the manufacturing part number and the serial
number.
total_canisters The maximum number of canisters for this enclosure type.
online_canisters The number of canisters contained in this enclosure that are online.
An invocation example
lsenclosure -delim :
serial_number 64G005S
FRU_part_number 85Y5896
FRU_identity 11S85Y5962YHU9994G005S
total_canisters 2
online_canisters 2
total_PSUs 2
online_PSUs 2
drive_slots 12
firmware_level_1 10
firmware_level_2 F6C07926
machine_part_number 2072L2C
lsenclosurebattery
Use the lsenclosurebattery command to display information about the batteries in the enclosure power
supply units (PSUs).
Syntax
lsenclosurebattery [-delim delimiter] [-battery battery_id] [enclosure_id]
Parameters
-delim delimiter
(Optional) By default in a concise view, all columns of data are space-separated. The width of each
column is set to the maximum width of each item of data. A detailed view provides each item of data
in its own row, and if the headers are displayed, the data is separated from the header by a space.
The -delim parameter overrides this behavior. Valid input for the -delim parameter is a one-byte
character. If you enter -delim : on the command line, the colon character (:) separates all items of
data in a concise view; for example, the spacing of columns does not occur. In a detailed view, the
data is separated from its header by the specified delimiter.
-battery battery_id
(Optional) Provides a detailed view of the specified enclosure battery. Valid only when an enclosure
is specified.
enclosure_id
(Optional) Lists the batteries for the specified enclosure.
Description
This command displays information about the batteries in the enclosure PSUs. The concise view displays
a line for each battery slot in every control enclosure, regardless of whether a battery is present. Batteries
are not shown for expansion enclosures. Table 24 shows possible outputs.
Table 24. lsenclosurebattery outputs
Attribute Description
enclosure_id The identity of the enclosure that contains the battery.
battery_id Identifies the battery in the enclosure.
status The status of the battery:
v online: The battery is present and working as usual
v degraded: The battery is present but not working as usual
v offline: The battery cannot be detected
charging_status The charging state of the battery:
v idle: the battery is neither charging nor discharging
v charging: the battery is charging
v reconditioning: the battery is reconditioning itself by being discharged and
then recharged
Important: A battery is unavailable when in reconditioning state. Reconditioning
occurs:
v Every three months
v When a battery is used for (at least) two power failures
Reconditioning takes approximately 12 hours.
recondition_needed The battery needs to be reconditioned, but cannot be reconditioned because of
one or more errors.
percent_charged Indicates the charge of the battery (as a percentage).
end_of_life_warning Indicates whether the battery is reaching its end of life and needs to be replaced:
v yes
v no
An invocation example
lsenclosurebattery -delim :
lscontrolenclosurecandidate
Use the lscontrolenclosurecandidate command to display a list of all control enclosures you can add to
the current system.
Syntax
lscontrolenclosurecandidate
Parameters
None.
Description
Table 25 provides the possible values that are applicable to the attributes that are displayed as data in the
output views.
Table 25. lscontrolenclosurecandidate attribute values
Attribute Value
serial_number The serial number for the enclosure.
product_MTM The MTM for the enclosure.
lsenclosurecanister
Use the lsenclosurecanister command to view a detailed status for each canister in an enclosure.
Syntax
lsenclosurecanister [-delim delimiter] [enclosure_id [-canister canister_id]]
Parameters
enclosure_id
Lists the canisters for the specified enclosure.
-canister canister_id
Valid only when the enclosure_id is specified. Provides a detailed view of the canister for the
specified enclosure.
-delim delimiter
(Optional) By default in a concise view, all columns of data are space-separated. The width of each
column is set to the maximum possible width of each item of data. In a detailed view, each item of
data has its own row, and if the headers are displayed, the data is separated from the header by a
space. The -delim parameter overrides this behavior. Valid input for the -delim parameter is a
one-byte character. If you enter -delim : on the command line, the colon character (:) separates all
items of data in a concise view; for example, the spacing of columns does not occur. In a detailed
view, the data is separated from its header by the specified delimiter.
Description
This command enables you to view a detailed status for each canister in an enclosure. Table 26 shows the
possible outputs:
Table 26. lsenclosurecanister output
Attribute Description
enclosure_id The identity of the enclosure that contains the canister.
canister_id Identifies which of the canisters in the enclosure this is.
status The status of the canister:
v online: the canister is present and working normally.
v degraded: the canister is present but not working normally.
v offline: the canister cannot be detected.
type The type of canister:
v node
v expansion
node_id The node that corresponds to this canister; blank if the canister is not a node, or if the
node is offline or not part of the clustered system.
node_name The node that corresponds to this canister; blank if the canister is not a node, or if the
node is offline or not part of the clustered system.
FRU_part_number The field-replaceable unit (FRU) part number of the canister.
FRU_identity The 11S number that combines the manufacturing part number and the serial number.
WWNN The Fibre Channel worldwide node name (WWNN) of the canister (node canisters
only).
temperature (0 to 245) The temperature of the canister (in degrees Celsius). If the temperature goes
below 0, 0 is displayed.
An invocation example
lsenclosurecanister -delim :
A detailed example
lsenclosurecanister -canister 1 1
node_id 1
node_name node1
FRU_part_number AAAAAAA
FRU_identity 11S1234567Y12345678901
WWNN 5005076801005F94
firmware_level XXXXXXXXXX
temperature 23
fault_LED flashing
SES_status online
error_sequence_number
SAS_port_1_status online
SAS_port_2_status online
firmware_level_2 0501
firmware_level_3 14
firmware_level_4 B69F66FF
firmware_level_5 5C2A6A44
lsenclosurepsu
Use the lsenclosurepsu command to view information about each power-supply unit (PSU) in the
enclosure.
Syntax
lsenclosurepsu [-delim delimiter] [enclosure_id [-psu psu_id]]
Parameters
enclosure_id
(Optional) Lists the PSUs for the specified enclosure.
-psu psu_id
(Optional) Valid only when the enclosure_id is specified. Provides a detailed view of the PSU for the
specified enclosure.
-delim delimiter
(Optional) By default in a concise view, all columns of data are space-separated. The width of each
column is set to the maximum possible width of each item of data. In a detailed view, each item of
data has its own row, and if the headers are displayed, the data is separated from the header by a
space. The -delim parameter overrides this behavior. Valid input for the -delim parameter is a
one-byte character. If you enter -delim : on the command line, the colon character (:) separates all
items of data in a concise view; for example, the spacing of columns does not occur. In a detailed
view, the data is separated from its header by the specified delimiter.
Description
This command enables you to view information about each power-supply unit (PSU) in the enclosure.
Table 27 shows the possible outputs:
Table 27. lsenclosurepsu output
Attribute Description
enclosure_id The ID of the enclosure containing the PSU.
psu_id The ID of the PSU in the enclosure.
An invocation example
lsenclosurepsu -delim :
FRU_part_number 85Y5847
FRU_identity 11S85Y5847YG50CG07W0LJ
firmware_level_1 0314
firmware_level_2 AF9293E5
lsenclosureslot
Use the lsenclosureslot command to view information about each drive slot in the enclosure.
Syntax
lsenclosureslot [-delim delimiter] [-nohdr] [enclosure_id [-slot slot_id]]
Parameters
-delim delimiter
(Optional) By default in a concise view, all columns of data are space-separated. The width of each
column is set to the maximum possible width of each item of data. In a detailed view, each item of
data has its own row, and if the headers are displayed, the data is separated from the header by a
space. The -delim parameter overrides this behavior. Valid input for the -delim parameter is a
one-byte character. If you enter -delim : on the command line, the colon character (:) separates all
items of data in a concise view; for example, the spacing of columns does not occur. In a detailed
view, the data is separated from its header by the specified delimiter.
-nohdr
(Optional) By default, headings are displayed for each column of data in a concise style view, and for
each item of data in a detailed style view. This parameter suppresses the display of these headings.
-slot slot_id
(Optional) Valid only when an enclosure is specified. Gives detailed view for that enclosure slot.
enclosure_id
(Optional) Lists slots for that enclosure. Must be specified if -slot is used.
Description
This command enables you to view information about each drive slot in the enclosure, such as whether a
drive is present, and the port status for that drive. Table 28 shows the possible outputs:
Table 28. lsenclosureslot output
Attribute Description
enclosure_id The identity of the enclosure which contains the drive slot.
slot_id Identifies which of the drive slots in the enclosure this is.
port_1_status The status of enclosure slot port 1. If the port is bypassed for multiple reasons, only one
is shown. In order of priority, they are:
v online: enclosure slot port 1 is online
v excluded_by_drive: the drive excluded the port
v excluded_by_enclosure: the enclosure excluded the port
v excluded_by_system: the clustered system (system) has excluded the port
An invocation example
lsenclosureslot -delim :
1:21:online:online:yes:8
1:22:online:online:yes:0
1:23:online:online:yes:3
1:24:online:online:yes:2
triggerenclosuredump
Use the triggerenclosuredump command to force the specified enclosure or enclosures to dump data.
Syntax
triggerenclosuredump [-port port_id -iogrp iogrp_id_or_name | -enclosure enclosure_id]
Note:
1. You can only use one of the optional parameters (-port or -enclosure).
2. If -port is specified, -iogrp must also be specified.
3. If -iogrp is specified, -port must also be specified.
Parameters
-port port_id
(Optional) If the system is wired correctly, this value is identical to the ID of the chain with the
enclosures you want to dump. If the system is wired incorrectly, all the enclosures connected to port
port_id of either node canister are dumped.
-iogrp iogrp_id_or_name
(Optional) The ID or name of the I/O group the control enclosure belongs to.
-enclosure enclosure_id
(Optional) The ID of the enclosure you want to dump.
Description
This command requests the canisters in the enclosure or enclosures specified to dump data. The dumped
data is subsequently collected and moved to /dumps/enclosure on the nodes that are connected to the
enclosure. There is one file for each canister successfully dumped and they may be located on different
nodes. Dumps are for use by the IBM Support Center, which has the tools to interpret the dump data.
Use the cpdumps command to copy the files from the system. This command does not disrupt access to
the enclosures.
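The parameter rules in the note above can be expressed as a small argument check (an illustrative Python sketch with a hypothetical validator name, not part of the CLI):

```python
def validate_dump_args(port=None, iogrp=None, enclosure=None):
    """Check the documented triggerenclosuredump parameter combinations."""
    # -port and -enclosure are mutually exclusive
    if port is not None and enclosure is not None:
        raise ValueError("use only one of -port or -enclosure")
    # -port and -iogrp must be given together
    if (port is None) != (iogrp is None):
        raise ValueError("-port and -iogrp must be specified together")
    return True
```

For example, specifying a port without its I/O group is rejected, while dumping by enclosure ID alone is accepted.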
Chapter 14. Licensing commands
The licensing commands enable you to work with SAN Volume Controller licensed functions.
chlicense
Use the chlicense command to change license settings for clustered system (system) features.
Syntax
chlicense [-flash capacity_TB] [-remote capacity_TB] [-virtualization capacity_TB]
[-physical_flash on|off] [-physical_remote on|off] [-physical_disks number]
[-compression compression_setting]
Parameters
-flash capacity_TB
(Optional) Changes system licensing for the FlashCopy feature. To change the licensed capacity for
the FlashCopy feature, specify a capacity in terabytes (TB).
Note: Use the optional -flash parameter only with the SAN Volume Controller.
-remote capacity_TB
(Optional) Changes system licensing for the Metro Mirror and Global Mirror feature. To change the
licensed capacity for the Metro Mirror and Global Mirror feature, specify a capacity in terabytes (TB).
Note: For Storwize V7000, specify the total number of internal and external enclosures that you have
licensed on your system. You must have a Remote Mirroring license for all enclosures.
-virtualization capacity_TB
(Optional) Changes system licensing for the Virtualization feature. To change the licensed capacity for
the Virtualization feature, specify a capacity in terabytes (TB).
Note: For Storwize V7000, specify the number of enclosures of external storage that you have been
authorized by IBM to use.
-physical_flash on | off
(Optional) For physical disk licensing, enables or disables the FlashCopy feature. The default value is
off.
-physical_remote on | off
(Optional) For physical disk licensing, enables or disables the Metro Mirror and Global Mirror
feature. The default value is off.
Note: Not all SAN Volume Controller systems support compression. However, you can set a
compression license value on a system that has no nodes that support compression.
Note:
v If the -physical_disks value is set to zero, the -physical_flash and -physical_remote values are
turned off.
v If the -physical_disks value is nonzero, the -flash, -remote, and -virtualization values cannot be
set.
v If the -physical_disks value is nonzero, only the FlashCopy and RemoteCopy usage is monitored and
appropriate error messages are logged.
v If the -flash, -remote, or -virtualization values are nonzero, the -physical_flash, -physical_remote,
and -physical_disks values cannot be set.
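The rules above that link physical-disk licensing and capacity-based licensing can be sketched as a validation routine (illustrative Python under the stated rules, not product code):

```python
def validate_chlicense(physical_disks=0, flash=0, remote=0, virtualization=0,
                       physical_flash="off", physical_remote="off"):
    """Apply the documented rules linking the two licensing modes."""
    capacity_set = bool(flash or remote or virtualization)
    if physical_disks == 0:
        # a zero -physical_disks value turns the physical options off
        physical_flash = physical_remote = "off"
    elif capacity_set:
        # nonzero -physical_disks excludes the capacity-based values
        raise ValueError("-flash, -remote, and -virtualization cannot be set "
                         "when -physical_disks is nonzero")
    return physical_flash, physical_remote
```

A design note: the sketch resolves the rules in the order listed in the documentation, so setting a capacity value with -physical_disks at zero simply leaves the physical options off rather than failing.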
Description
The chlicense command changes license settings for the system. Any change that is made is logged as an
event in the license setting log.
For Storwize V7000, the enclosure license already includes virtualization of internal drives on your
system. You can use this command to set any additional options. The total amounts for your system or
systems must not exceed the total capacity authorization that you have obtained from IBM.
For SAN Volume Controller the default is to have no copy services functions licensed, but this does not
stop you from creating and using Copy Services. However, errors are placed in the license settings log
that state that you are using an unlicensed feature. The command-line tool return code also notifies you
that you are using an unlicensed feature.
For Storwize V7000, the default is to have no Metro Mirror or Global Mirror function licensed, but this
does not stop you from creating and using Copy Services. However, errors are placed in the license
settings log that state that you are using an unlicensed feature. The command-line tool return code also
notifies you that you are using an unlicensed feature.
The total virtualized capacity can also be modified with this command. This is the number of terabytes
(TB) of virtual disk capacity that can be configured by the system.
When usage reaches 90% of the licensed capacity, any attempt to create or extend Virtual Disks,
Relationships, or Mappings results in a warning message from the command-line tool. This does not stop
you from creating and expanding Virtual Disks, Relationships, or Mappings. When usage reaches or
exceeds 100% of the licensed capacity, errors are placed in the license settings log.
Any error that is placed in the license settings log results in a generic error being placed in the system
error log. This occurs when you issue a command that violates the license agreement. The return code
also notifies you that you are violating the license settings.
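The threshold behavior described above can be modeled in a few lines (an illustrative sketch, not the product's logging implementation):

```python
def check_license_usage(used_tb, licensed_tb):
    """Classify usage against the documented 90% and 100% thresholds."""
    ratio = used_tb / licensed_tb
    if ratio >= 1.0:
        return "error_logged"   # errors placed in the license settings log
    if ratio >= 0.9:
        return "warning"        # CLI message; the operation still proceeds
    return "ok"
```

So 9 TB used against a 10 TB license produces a warning, while reaching the full 10 TB logs an error without blocking the operation.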
An invocation example
chlicense -remote 5
dumpinternallog
Use the dumpinternallog command to dump the contents of the license settings error and event log to a
file on the current configuration node.
Syntax
dumpinternallog
Description
This command dumps the contents of the internal license settings error and event log to a file on the
current configuration node.
This file is always called feature.txt and is created, or overwritten, in the /dumps/feature directory on the
configuration node.
Before any entries are made, the license settings log contains only zeros; dumping the log in this state
with the dumpinternallog command results in an empty file.
An invocation example
dumpinternallog
chfcconsistgrp
Use the chfcconsistgrp command to change the name of a consistency group or to mark the group for
auto-deletion.
Syntax
chfcconsistgrp [-name new_name_arg] [-autodelete on|off] fc_consist_group_id | fc_consist_group_name
Parameters
-name new_name_arg
(Optional) Specifies the new name to assign to the consistency group.
-autodelete on | off
(Optional) Deletes the consistency group when the last mapping that it contains is deleted or
removed from the consistency group.
fc_consist_group_id | fc_consist_group_name
(Required) Specifies the ID or existing name of the consistency group that you want to modify.
Description
The chfcconsistgrp command changes the name of a consistency group, marks the group for
auto-deletion, or both.
Note: Maps that are rc_controlled are not shown in the view when this command is specified.
An invocation example
chfcconsistgrp -name testgrp1 fcconsistgrp1
chfcmap
Use the chfcmap command to modify attributes of an existing mapping.
Syntax
chfcmap [-name new_name_arg] [-force | -consistgrp consist_group_id|consist_group_name]
[-copyrate rate] [-autodelete on|off] [-cleanrate rate] fc_map_id | fc_map_name
Parameters
-name new_name_arg
(Optional) Specifies the new name to assign to the mapping. The -name parameter cannot be used
with any other optional parameters.
-force
(Optional) Specifies that the mapping be modified to a stand-alone mapping (equivalent to creating
the mapping without a consistency group ID). You cannot specify the -force parameter with the
-consistgrp parameter.
-consistgrp consist_group_id | consist_group_name
(Optional) Specifies the consistency group for which you want to modify the mapping. You cannot
specify the -consistgrp parameter with the -force parameter.
Note: The consistency group cannot be modified if the specified consistency group is in the
preparing, prepared, copying, suspended, or stopping state.
-copyrate rate
(Optional) Specifies the copy rate. The rate value can be 0 - 100. The default value is 50. A value of 0
indicates no background copy process. For the supported -copyrate values and their corresponding
rates, see Table 29 on page 203.
-autodelete on | off
(Optional) Specifies that the autodelete function be turned on or off for the specified mapping. When
you specify -autodelete on, the mapping is deleted after the background copy completes. If the
background copy is already complete, the mapping is deleted immediately.
-cleanrate rate
(Optional) Sets the cleaning rate for the mapping. The rate value can be 0 - 100. The default value is
50.
fc_map_id | fc_map_name
(Required) Specifies the ID or name of the mapping to modify. Enter the ID or name last on the
command line.
Description
Attention: You must enter the fc_map_id | fc_map_name last on the command line.
If you have created several FlashCopy mappings for a group of VDisks that contain elements of data for
the same application, you can assign these mappings to a single FlashCopy consistency group. You can
then issue a single prepare command and a single start command for the whole group, for example, so
that all of the files for a particular database are copied at the same time.
The copyrate parameter specifies the copy rate. If 0 is specified, background copy is disabled. The
cleanrate parameter specifies the rate for cleaning the target VDisk. The cleaning process is only active if
the mapping is in the copying state and the background copy has completed, the mapping is in the
copying state and the background copy is disabled, or the mapping is in the stopping state. You can
disable cleaning when the mapping is in the copying state by setting the cleanrate parameter to 0. If the
cleanrate is set to 0, the cleaning process runs at the default rate of 50 when the mapping is in the
stopping state to ensure that the stop operation completes.
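The cleaning conditions described above can be summarized as a small model (an illustrative Python sketch, not product code):

```python
def cleaning_active(map_state, copy_complete, copy_disabled):
    """Return whether the cleaning process runs, per the documented states."""
    return (map_state == "copying" and (copy_complete or copy_disabled)) \
        or map_state == "stopping"

def effective_clean_rate(map_state, clean_rate):
    """A clean rate of 0 disables cleaning while copying, but the stopping
    state still cleans at the default rate of 50 so the stop can complete."""
    if map_state == "stopping" and clean_rate == 0:
        return 50
    return clean_rate
```

This captures the rule that a cleanrate of 0 suppresses cleaning only in the copying state; once the mapping is stopping, cleaning proceeds at the default rate.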
Table 29 provides the relationship of the copy rate and cleaning rate values to the attempted number of
grains to be split per second. A grain is the unit of data represented by a single bit.
Table 29. Relationship between the rate, data rate and grains per second values
User-specified rate
attribute value Data copied/sec 256 KB grains/sec 64 KB grains/sec
1 - 10 128 KB 0.5 2
11 - 20 256 KB 1 4
21 - 30 512 KB 2 8
31 - 40 1 MB 4 16
41 - 50 2 MB 8 32
51 - 60 4 MB 16 64
61 - 70 8 MB 32 128
71 - 80 16 MB 64 256
81 - 90 32 MB 128 512
91 - 100 64 MB 256 1024
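The bands in Table 29 double the data rate for every 10 points of the user-specified rate, which can be computed directly (an illustrative sketch reconstructed from the table; the function name is hypothetical):

```python
def copy_rate_to_throughput(rate):
    """Map a user rate (1-100) to data copied per second (KB) and to the
    grains split per second for 256 KB and 64 KB grains, per Table 29."""
    if not 1 <= rate <= 100:
        raise ValueError("rate must be 1-100; 0 disables the process")
    kb_per_sec = 128 * 2 ** ((rate - 1) // 10)   # doubles every band of 10
    return kb_per_sec, kb_per_sec / 256, kb_per_sec / 64
```

For example, the default rate of 50 yields 2 MB/s, which splits 8 grains of 256 KB (or 32 grains of 64 KB) per second.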
Note: Maps that are rc_controlled are not shown in the view when this command is specified.
An invocation example
chfcmap -name testmap 1
mkfcconsistgrp
Use the mkfcconsistgrp command to create a new FlashCopy consistency group and identification name.
Syntax
mkfcconsistgrp [-name consist_group_name] [-autodelete]
Parameters
-name consist_group_name
(Optional) Specifies a name for the consistency group. If you do not specify a consistency group
name, a name is automatically assigned to the consistency group. For example, if the next available
consistency group ID is id=2, the consistency group name is fccstgrp2.
-autodelete
(Optional) Deletes the consistency group when the last mapping that it contains is deleted or
removed from the consistency group.
Description
This command creates a new consistency group and identification name. The ID of the new group is
displayed when the command process completes.
If you have created several FlashCopy mappings for a group of VDisks (volumes) that contain elements
of data for the same application, you might find it convenient to assign these mappings to a single
FlashCopy consistency group. You can then issue a single prepare command and a single start command
for the whole group, for example, so that all of the files for a particular database are copied at the same
time.
Note: Maps that are rc_controlled are not shown in the view when this command is specified.
Remember: Names representing Metro Mirror or Global Mirror consistency group relationships are
restricted to fifteen characters in length (not sixty-three for an extended character set).
An invocation example
mkfcconsistgrp
mkfcmap
Use the mkfcmap command to create a new FlashCopy mapping, which maps a source VDisk (volume) to
a target volume for subsequent copying.
Syntax
mkfcmap -source src_vdisk_id|src_vdisk_name -target target_vdisk_id|target_vdisk_name
[-name new_name_arg] [-consistgrp consist_group_id|consist_group_name]
[-copyrate rate] [-autodelete] [-grainsize 64|256] [-incremental]
[-cleanrate rate] [-iogrp iogroup_id|iogroup_name]
Parameters
-source src_vdisk_id | src_vdisk_name
(Required) Specifies the ID or name of the source volume.
-target target_vdisk_id | target_vdisk_name
(Required) Specifies the ID or name of the target volume.
-name new_name_arg
(Optional) Specifies the name to assign to the new mapping.
-consistgrp consist_group_id | consist_group_name
(Optional) Specifies the consistency group to add the new mapping to. If you do not specify a
consistency group, the mapping is treated as a stand-alone mapping.
-copyrate rate
(Optional) Specifies the copy rate. The rate value can be 0 - 100. The default value is 50. A value of 0
indicates no background copy process. For the supported -copyrate values and their corresponding
rates, see Table 30 on page 206.
-autodelete
(Optional) Specifies that a mapping be deleted when the background copy completes. The default,
which applies if this parameter is not entered, is that autodelete is set to off.
-grainsize 64 | 256
(Optional) Specifies the grain size for the mapping. The default value is 256. Once set, this value
cannot be changed.
-incremental
(Optional) Marks the FlashCopy mapping as an incremental copy. The default is nonincremental.
Once set, this value cannot be changed.
-cleanrate rate
(Optional) Sets the cleaning rate for the mapping. The rate value can be 0 - 100. The default value is
50.
-iogrp iogroup_name | iogroup_id
(Optional) Specifies the I/O group for the FlashCopy bitmap. Once set, this value cannot be changed.
The default is the I/O group of the source volume for a single-target mapping, or the I/O group of
the other FlashCopy mapping to which either the source or target volume belongs.
Note: If not enough bitmap space is available to complete this command, more space will
automatically be allocated in the bitmap memory (unless you have already reached the maximum
bitmap memory).
Description
This command creates a new FlashCopy mapping. This mapping persists until it is manually deleted, or
until it is automatically deleted when the background copy completes with the autodelete parameter set
to on. The source and target VDisks (volumes) must be specified on the mkfcmap command. The mkfcmap
command fails if the source and target volumes are not identical in size. Issue the lsvdisk -bytes
command to find the exact size of the source volume for which you want to create a target disk of the
same size. The target volume that you specify cannot be a target volume in an existing FlashCopy
mapping. A mapping cannot be created if the resulting set of connected mappings exceeds 256 connected
mappings.
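The preconditions in this paragraph can be gathered into a single check (an illustrative Python sketch with a hypothetical validator name, not the product's internal logic):

```python
def validate_fcmap(source_bytes, target_bytes, target_already_a_target,
                   connected_mappings_after):
    """Check the documented preconditions for mkfcmap."""
    if source_bytes != target_bytes:
        # compare exact sizes, as reported by lsvdisk -bytes
        raise ValueError("source and target volumes must be identical in size")
    if target_already_a_target:
        raise ValueError("target is already a target in an existing mapping")
    if connected_mappings_after > 256:
        raise ValueError("resulting set of connected mappings exceeds 256")
    return True
```

The size comparison is deliberately byte-exact, which is why the text recommends lsvdisk -bytes rather than a rounded capacity view.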
The mapping can optionally be given a name and assigned to a consistency group, which is a group of
mappings that can be started with a single command. These are groups of mappings that can be
processed at the same time. This enables multiple VDisks (volumes) to be copied at the same time, which
creates a consistent copy of multiple disks. This consistent copy of multiple disks is required by some
database products in which the database and log files reside on different disks.
If the specified source and target VDisks (volumes) are the target and source volumes, respectively, of an
existing mapping, then the mapping being created and the existing mapping become partners. If one
mapping is created as incremental, then its partner is automatically incremental. A mapping can have
only one partner.
You can create a FlashCopy mapping in which the target volume is a member of a Metro Mirror or
Global Mirror relationship, unless one of the following conditions applies:
v The relationship is with a clustered system that is running an earlier code level.
v The I/O group for the mapping is different from the I/O group for the proposed mapping target
volume.
Table 30 provides the relationship of the copy rate and cleaning rate values to the attempted number of
grains to be split per second. A grain is the unit of data represented by a single bit.
Remember: If either the specified source or target volume is defined as a change volume for a
relationship, mkfcmap is not successful.
Table 30. Relationship between the rate, data rate and grains per second values
User-specified rate
attribute value Data copied/sec 256 KB grains/sec 64 KB grains/sec
1 - 10 128 KB 0.5 2
11 - 20 256 KB 1 4
21 - 30 512 KB 2 8
31 - 40 1 MB 4 16
41 - 50 2 MB 8 32
51 - 60 4 MB 16 64
61 - 70 8 MB 32 128
71 - 80 16 MB 64 256
81 - 90 32 MB 128 512
91 - 100 64 MB 256 1024
Note: Maps that are rc_controlled are not shown in the view when this command is specified.
An invocation example
mkfcmap -source 0 -target 2 -name mapone
prestartfcconsistgrp
Use the prestartfcconsistgrp command to prepare a consistency group (a group of FlashCopy
mappings) so that the consistency group can be started. This command flushes the cache of any data that
is destined for the source volume and forces the cache into the write-through mode until the consistency
group is started.
Syntax
prestartfcconsistgrp [-restore] fc_consist_group_id | fc_consist_group_name
Parameters
-restore
(Optional) Specifies the restore flag. This forces the consistency group to be prepared even if the
target volume of one of the mappings in the consistency group is being used as a source volume of
another active mapping. An active mapping is in the copying, suspended, or stopping state.
fc_consist_group_id | fc_consist_group_name
(Required) Specifies the name or ID of the consistency group that you want to prepare.
Description
This command prepares a consistency group (a group of FlashCopy mappings) to subsequently start. The
preparation step ensures that any data that resides in the cache for the source volume is first flushed to
disk. This step ensures that the FlashCopy target volume is identical to what has been acknowledged to
the host operating system as having been written successfully to the source volume.
You can use the restore parameter to force the consistency group to be prepared even if the target
volume of one or more mappings in the consistency group is being used as a source volume of another
active mapping. In this case the mapping restores as shown in the lsfcmap view. If the restore parameter
is specified when preparing a consistency group where none of the target volumes are the source volume
of another active mapping, then the parameter is ignored.
You must issue the prestartfcconsistgrp command to prepare the FlashCopy consistency group before
the copy process can be started. When you have assigned several mappings to a FlashCopy consistency
group, you must issue a single prepare command for the whole group to prepare all of the mappings at
once.
The consistency group must be in the idle_or_copied or stopped state before it can be prepared. When
you enter the prestartfcconsistgrp command, the group enters the preparing state. After the
preparation is complete, the consistency group status changes to prepared. At this point, you can start the
group.
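The state transitions in the paragraph above can be sketched as a small state machine. This is a minimal illustration under the assumption that preparation completes successfully; the class and method names are ours, not part of the product CLI:

```python
# Illustrative sketch only: a toy model of the consistency-group state
# transitions described above. The state names come from this guide;
# the class itself is NOT part of the SVC CLI.

class FlashCopyConsistencyGroup:
    PREPARABLE_STATES = {"idle_or_copied", "stopped"}

    def __init__(self, state="idle_or_copied"):
        self.state = state

    def prestart(self):
        # prestartfcconsistgrp is accepted only from idle_or_copied or
        # stopped; the group then moves through preparing to prepared.
        if self.state not in self.PREPARABLE_STATES:
            raise ValueError(f"cannot prepare from state '{self.state}'")
        self.state = "preparing"   # cache flush for the source volumes
        self.state = "prepared"    # flush complete; the group can start

    def start(self):
        # startfcconsistgrp requires a prepared group.
        if self.state != "prepared":
            raise ValueError("group must be prepared before it is started")
        self.state = "copying"

group = FlashCopyConsistencyGroup()
group.prestart()
print(group.state)  # prepared
group.start()
print(group.state)  # copying
```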
If FlashCopy mappings are assigned to a consistency group, the preparing and the subsequent starting of
the mappings in the group must be performed on the consistency group rather than on an individual
FlashCopy mapping that is assigned to the group. Only stand-alone mappings, which are mappings that
are not assigned to a consistency group, can be prepared and started on their own. A FlashCopy
consistency group must be prepared before it can be started.
This command is rejected if the target of a FlashCopy mapping in the consistency group is in a Metro
Mirror or Global Mirror relationship, except where the relationship is one of the following types and is
the secondary target of the remote copy:
v idling
v disconnected
v consistent_stopped
v inconsistent_stopped
The FlashCopy mapping also fails in the following cases:
v You use the prep parameter.
v The target volume is an active remote copy primary or secondary volume.
v The FlashCopy target (and remote copy primary target) volume is offline. If this occurs, the FlashCopy
mapping stops and the target volume remains offline.
Note: Maps that are rc_controlled are not shown in the view when this command is specified.
An invocation example
prestartfcconsistgrp fcconsistgrp1
prestartfcmap
Use the prestartfcmap command to prepare a FlashCopy mapping so that it can be started. This
command flushes the cache of any data that is destined for the source volume and forces the cache into
the write-through mode until the mapping is started.
Syntax
prestartfcmap [-restore] fc_map_id | fc_map_name
Parameters
-restore
(Optional) Specifies the restore flag. This forces the mapping to be prepared even if the target volume
is being used as a source volume in another active mapping. An active mapping is in the copying,
suspended, or stopping state.
fc_map_id | fc_map_name
(Required) Specifies the name or ID of the mapping to prepare.
Description
This command prepares a single mapping for subsequent starting. The preparation step ensures that any
data that resides in the cache for the source volume is first transferred to disk. This step ensures that the
copy that is made is consistent with what the operating system expects on the disk.
The restore parameter can be used to force the mapping to be prepared even if the target volume is
being used as a source volume of another active mapping. In this case, the mapping is restoring as
shown in the lsfcmap view. If the restore parameter is specified when preparing a mapping where the
target volume is not the source volume of another active mapping, then the parameter is ignored.
Note: To prepare a FlashCopy mapping that is part of a consistency group, you must use the
prestartfcconsistgrp command.
The mapping must be in the idle_or_copied or stopped state before it can be prepared. When the
prestartfcmap command is processed, the mapping enters the preparing state. After the preparation is
complete, it changes to the prepared state. At this point, the mapping is ready to start.
This command is rejected if the target of the FlashCopy mappings is the secondary volume in a Metro
Mirror or Global Mirror relationship (so that the FlashCopy target is the remote copy secondary).
Note: If the remote copy is idling or disconnected, even if the FlashCopy and remote copy are pointing
to the same volume, the auxiliary volume is not necessarily the secondary volume. In this case, you can
start a FlashCopy mapping.
The FlashCopy mapping also fails in the following cases:
v The remote copy is active.
v The FlashCopy target (and remote copy primary target) volume is offline. If this occurs, the FlashCopy
mapping stops and the target volume remains offline.
Note: Maps that are rc_controlled are not shown in the view when this command is specified.
An invocation example
prestartfcmap 1
rmfcconsistgrp
Use the rmfcconsistgrp command to delete a FlashCopy consistency group.
Syntax
rmfcconsistgrp [-force] fc_consist_group_id | fc_consist_group_name
Parameters
-force
(Optional) Specifies that all of the mappings that are associated with a consistency group that you
want to delete are removed from the group and changed to stand-alone mappings. This parameter is
only required if the consistency group that you want to delete contains mappings.
Important: Using the force parameter might result in a loss of access. Use it only under the direction
of the IBM Support Center.
fc_consist_group_id | fc_consist_group_name
(Required) Specifies the ID or name of the consistency group that you want to delete.
Description
This command deletes the specified FlashCopy consistency group. If there are mappings that are
members of the consistency group, the command fails unless you specify the -force parameter. When you
specify the -force parameter, all of the mappings that are associated with the consistency group are
removed from the group and changed to stand-alone mappings.
To delete a single mapping in the consistency group, you must use the rmfcmap command.
Note: Maps that are rc_controlled are not shown in the view when this command is specified.
An invocation example
rmfcconsistgrp fcconsistgrp1
rmfcmap
Use the rmfcmap command to delete an existing mapping.
Syntax
rmfcmap [-force] fc_map_id | fc_map_name
Parameters
-force
(Optional) Specifies that the target volume is brought online. This parameter is required if the
FlashCopy mapping is in the stopped state.
fc_map_id | fc_map_name
(Required) Specifies the ID or name of the FlashCopy mapping to delete. Enter the ID or name last
on the command line.
Description
The rmfcmap command deletes the specified mapping if the mapping is in the idle_or_copied or stopped
state. If it is in the stopped state, the -force parameter is required. If the mapping is in any other state,
you must stop the mapping before you can delete it.
Deleting a mapping only deletes the logical relationship between the two virtual disks; it does not affect
the virtual disks themselves. However, if you force the deletion, the target virtual disk (which might
contain inconsistent data) is brought back online.
If the target of the FlashCopy mapping is a member of the remote copy, the remote copy can be affected
in the following ways:
v If a stopped FlashCopy mapping is deleted and the I/O group associated with the FlashCopy mapping
is suspended while this delete is being processed, then all remote copy relationships associated with
the target volume of the FlashCopy mapping that were active while the FlashCopy mapping was
copying can be corrupted. You must resynchronize them next time you start the system.
v If a stopped FlashCopy mapping that has previously failed to prepare is deleted, then all remote copy
relationships in the set of remote copy relationships associated with the target volume can be
corrupted. You must resynchronize them next time you start the system.
Note: Maps that are rc_controlled are not shown in the view when this command is specified.
An invocation example
rmfcmap testmap
startfcconsistgrp
Use the startfcconsistgrp command to start a FlashCopy consistency group of mappings. This
command makes a point-in-time copy of the source volumes at the moment that the command is started.
Syntax
startfcconsistgrp [-prep] [-restore] fc_consist_group_id | fc_consist_group_name
Parameters
-prep
(Optional) Specifies that the designated FlashCopy consistency group be prepared prior to starting
the FlashCopy consistency group. A FlashCopy consistency group must be prepared before it can be
started. When you use this parameter, the system automatically issues the prestartfcconsistgrp
command for the group that you specify.
-restore
(Optional) Specifies the restore flag. When combined with the prep option, this forces the consistency
group to be prepared even if the target volume of one of the mappings in the consistency group is
being used as a source volume in another active mapping. An active mapping is in the copying,
suspended, or stopping state.
fc_consist_group_id | fc_consist_group_name
(Required) Specifies the ID or name of the consistency group to start.
Description
This command starts a consistency group, which results in a point-in-time copy of the source volumes of
all mappings in the consistency group. You can combine the restore parameter with the prep parameter
to force the consistency group to be prepared prior to starting, even if the target volume of one or more
mappings in the consistency group is being used as a source volume of another active mapping. In this
case, the mapping is restoring as shown in the lsfcmap view. If the restore parameter is specified when
starting a consistency group where none of the target volumes are the source volume of another active
mapping, the parameter is ignored.
If a consistency group is started and the target volume of the mapping being started has up to four other
incremental FlashCopy mappings using the target, the incremental recording is left on. If there are more
than four other incremental FlashCopy mappings using the target volume, the incremental recording for
all of these mappings is turned off until they are restarted.
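The incremental-recording rule above reduces to a simple threshold check. The following sketch is illustrative only (the function name is ours); the limit of four other incremental mappings comes from the text:

```python
def incremental_recording_stays_on(other_incremental_maps: int) -> bool:
    """Sketch of the rule above: incremental recording stays on while a
    target volume is used by at most four other incremental FlashCopy
    mappings; beyond that, recording is turned off for all of them
    until they are restarted. Illustrative only, not a CLI function."""
    return other_incremental_maps <= 4

print(incremental_recording_stays_on(4))  # True
print(incremental_recording_stays_on(5))  # False
```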
Note: The startfcconsistgrp command can take some time to process particularly if you have specified
the prep parameter. If you use the prep parameter, you give additional processing control to the system
because the system must prepare the mapping before the mapping is started. If the prepare process takes
too long, the system completes the prepare but does not start the consistency group. In this case, error
message CMMVC6209E displays. To control the processing times of the prestartfcconsistgrp and
startfcconsistgrp commands independently of each other, do not use the prep parameter. Instead, first
issue the prestartfcconsistgrp command, and then issue the startfcconsistgrp command to start the
copy.
This command is rejected if the target of the FlashCopy mapping in the specified consistency group is the
secondary volume in a Metro Mirror or Global Mirror relationship (so that the FlashCopy target is the
remote copy secondary).
Note: If the remote copy is idling or disconnected, even if the FlashCopy and remote copy are pointing
to the same volume, the auxiliary volume is not necessarily the secondary volume. In this case, you can
start a FlashCopy mapping.
The FlashCopy mapping also fails in the following cases, if the target of the FlashCopy mapping in the
specified consistency group is the primary volume in a Metro Mirror or Global Mirror relationship (so
that the FlashCopy target is the remote copy primary):
v The remote copy is active.
v The FlashCopy target (and remote copy primary target) volume is offline. If this occurs, the FlashCopy
mapping stops and the target volume remains offline.
Note: Maps that are rc_controlled are not shown in the view when this command is specified.
startfcmap
Use the startfcmap command to start a FlashCopy mapping. This command makes a point-in-time copy
of the source volume at the moment that the command is started.
Syntax
startfcmap [-prep] [-restore] fc_map_id | fc_map_name
Parameters
-prep
(Optional) Specifies that the designated mapping be prepared prior to starting the mapping. A
mapping must be prepared before it can be started. When you use this parameter, the system
automatically issues the prestartfcmap command for the mapping that you specify.
Note: If you have already used the prestartfcmap command, you cannot use the -prep parameter on
the startfcmap command; the command fails. However, if the FlashCopy mapping has already been
successfully prepared, the startfcmap command succeeds.
-restore
(Optional) Specifies the restore flag. When combined with the prep option, this forces the mapping to
be prepared even if the target volume is being used as a source volume in another active mapping.
An active mapping is in the copying, suspended, or stopping state.
fc_map_id | fc_map_name
(Required) Specifies the ID or name of the mapping to start.
Description
This command starts a single mapping, which results in a point-in-time copy of the source volume. You
can combine the restore parameter with the prep parameter to force the mapping to be prepared prior to
starting, even if the target volume is being used as a source volume of another active mapping. In this
case, the mapping is restoring as shown in the lsfcmap view. If the restore parameter is specified when
starting a mapping where the target volume is not the source volume of another active mapping, the
parameter is ignored and the mapping is not restoring as shown in the lsfcmap view.
If a mapping is started and the target volume of the mapping being started has up to four other
incremental FlashCopy mappings using the target, the incremental recording is left on. If there are more
than four other incremental FlashCopy mappings using the target volume, the incremental recording for
all of these mappings is turned off until they are restarted.
Note: The startfcmap command can take some time to start, particularly if you use the prep parameter.
If you use the prep parameter, you give additional starting control to the system. The system must
prepare the mapping before the mapping is started. To keep control when the mapping starts, you must
issue the prestartfcmap command before you issue the startfcmap command.
This command is rejected if the target of the FlashCopy mapping is the secondary volume in a Metro
Mirror or Global Mirror relationship (so that the FlashCopy target is the remote copy secondary).
Note: If the remote copy is idling or disconnected, even if the FlashCopy and remote copy are pointing
to the same volume, the auxiliary volume is not necessarily the secondary volume. In this case, you can
start a FlashCopy mapping.
The FlashCopy mapping also fails in the following cases, if the target of the FlashCopy mapping is the
primary volume in a Metro Mirror or Global Mirror relationship (so that the FlashCopy target is the
remote copy primary):
v The remote copy is active.
v The FlashCopy target (and remote copy primary target) volume is offline. If this occurs, the FlashCopy
mapping stops and the target volume remains offline.
Note: Maps that are rc_controlled are not shown in the view when this command is specified.
An invocation example
startfcmap -prep 2
stopfcconsistgrp
Use the stopfcconsistgrp command to stop all processing that is associated with a FlashCopy
consistency group that is in one of the following processing states: prepared, copying, stopping, or
suspended.
Syntax
stopfcconsistgrp [-force] [-split] fc_consist_group_id_or_name
Parameters
-force
(Optional) Specifies that all processing that is associated with the mappings of the designated
consistency group be stopped immediately.
Note: When you use this parameter, all FlashCopy mappings that depend on the mappings in this
group (as listed by the lsfcmapdependentmaps command) are also stopped.
If the -force parameter is not specified, the command is rejected if the target volume of any
FlashCopy mapping in the consistency group is the primary in a relationship that is mirroring I/O,
that is, in one of the following states:
v consistent_synchronized
v consistent_copying
v inconsistent_copying
If the -force parameter is specified, any Metro Mirror or Global Mirror relationships associated with
the target volumes of the FlashCopy mappings in the specified consistency group stop. If a remote
copy relationship associated with the target was mirroring I/O when the map was copying, it might
lose its difference recording capability and require a full resynchronization upon a subsequent restart.
-split
(Optional) Breaks the dependency on the source volumes of any mappings that are also dependent on
the target volumes. This parameter can only be specified when stopping a consistency group in
which all maps have a progress of 100, as shown by the lsfcmap command.
fc_consist_group_id_or_name
(Required) Specifies the name or ID of the consistency group to stop.
Description
This command stops a group of mappings in a consistency group. If the copy process is stopped, the
target disks become unusable unless they already contain complete images of the source. Disks that
contain complete images of the source have a progress of 100, as indicated in the lsfcmap command
output. The target volume is reported as offline if it does not contain a complete image. Before you can
access this volume, the group of mappings must be prepared and restarted.
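The target-availability rule above can be stated as a one-line check (illustrative only; the function name is ours). A progress of 100 in lsfcmap means the target already holds a complete image of the source:

```python
def target_state_after_stop(progress: int) -> str:
    """Per the description above: after a stop, a target volume whose
    map had copied completely (progress 100) stays usable; any other
    target is reported offline until the group of mappings is prepared
    and restarted. Illustrative helper, not part of the SVC CLI."""
    return "online" if progress == 100 else "offline"

print(target_state_after_stop(100))  # online
print(target_state_after_stop(42))   # offline
```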
If the consistency group is in the idle_or_copied state, the stopfcconsistgrp command has no effect and
the consistency group stays in the idle_or_copied state.
Note: Prior to SVC 4.2.0, the stopfcconsistgrp command always caused the consistency group to go to
the stopped state, taking the target volumes offline.
The split option can be used when all of the maps in the group have progress of 100. It removes the
dependency of any other maps on the source volumes. It might be used prior to starting another
FlashCopy consistency group whose target disks are the source disks of the mappings being stopped.
Once the consistency group has been stopped with the split option, the other consistency group could
then be started without the restore option.
Note: Maps that are rc_controlled are not shown in the view when this command is specified.
An invocation example
stopfcconsistgrp testmapone
stopfcmap
Use the stopfcmap command to stop all processing that is associated with a FlashCopy mapping that is in
one of the following processing states: prepared, copying, stopping, or suspended.
Syntax
stopfcmap [-force] [-split] fc_map_id_or_name
Parameters
-force
(Optional) Specifies that all processing that is associated with the designated mapping be stopped
immediately.
Note: When you use this parameter, all FlashCopy mappings that depend on this mapping (as listed
by the lsfcmapdependentmaps command) are also stopped.
If the -force parameter is not specified, the command is rejected if the target volume of the
FlashCopy mapping is the primary in a relationship that is mirroring I/O, that is, in one of the
following states:
v consistent_synchronized
v consistent_copying
v inconsistent_copying
If the -force parameter is specified for a FlashCopy mapping whose target volume is also in a Metro
Mirror or Global Mirror relationship, the relationship stops. If a remote copy relationship associated
with the target was mirroring I/O when the map was copying, it might lose its difference recording
capability and require a full resynchronization on a subsequent restart.
-split
(Optional) Breaks the dependency on the source volume of any mappings that are also dependent on
the target disk. This parameter can only be specified when stopping a map that has progress of 100
as shown by the lsfcmap command.
fc_map_id_or_name
(Required) Specifies the name or ID of the mapping to stop.
Description
This command stops a single mapping. If the copy process is stopped, the target disk becomes unusable
unless it already contained a complete image of the source (that is, unless the map had a progress of 100
as shown by the lsfcmap command). Before you can use the target disk, the mapping must once again be
prepared and then reprocessed (unless the target disk already contained a complete image).
Only stand-alone mappings can be stopped using the stopfcmap command. Mappings that belong to a
consistency group must be stopped using the stopfcconsistgrp command.
If the mapping is in the idle_or_copied state, the stopfcmap command has no effect and the mapping
stays in the idle_or_copied state.
Note: Before SAN Volume Controller 4.2.0, the stopfcmap command always changed the mapping state to
stopped and took the target volume offline. This change can break scripts that depend on the previous
behavior.
The split option can be used when the mapping has progress of 100. It removes the dependency of any
other mappings on the source volume. It might be used prior to starting another FlashCopy mapping
whose target disk is the source disk of the mapping being stopped. Once the mapping has been stopped
with the split option, the other mapping could then be started without the restore option.
Note: Maps that are rc_controlled are not shown in the view when this command is specified.
An invocation example
stopfcmap testmapone
addhostiogrp
Use the addhostiogrp command to map I/O groups to an existing host object.
Syntax
addhostiogrp {-iogrp iogrp_list | -iogrpall} host_id | host_name
Parameters
-iogrp iogrp_list
(Required if you do not use -iogrpall) Specifies a colon-separated list of one or more I/O groups that
must be mapped to the host. You cannot use this parameter with the -iogrpall parameter.
-iogrpall
(Required if you do not use -iogrp) Specifies that all the I/O groups must be mapped to the specified
host. You cannot use this parameter with the -iogrp parameter.
host_id | host_name
(Required) Specifies the host to which the I/O groups must be mapped, either by ID or by name.
Description
This command allows you to map the list of I/O groups to the specified host object.
An invocation example
addhostiogrp -iogrpall testhost
addhostport
Use the addhostport command to add worldwide port names (WWPNs) or iSCSI names to an existing
host object.
Syntax
addhostport {-hbawwpn wwpn_list | -iscsiname iscsi_name_list} [-force] host_id | host_name
Parameters
-hbawwpn wwpn_list
(Required if you do not use iscsiname) Specifies the list of Fibre Channel host ports to add to the
host. At least one worldwide port name (WWPN) or Internet Small Computer System Interface
(iSCSI) name must be specified. You cannot use this parameter with the iscsiname parameter.
-iscsiname iscsi_name_list
(Required if you do not use hbawwpn) Specifies the comma-separated list of iSCSI names to add to
the host. At least one WWPN or iSCSI name must be specified. You cannot use this parameter with
the hbawwpn parameter.
-force
(Optional) Specifies that the list of ports be added to the host without validation of the WWPN list.
host_id | host_name
(Required) Specifies the host object to add ports to, either by ID or by name.
Description
This command adds a list of host bus adapter (HBA) WWPNs or iSCSI names to the specified host
object. Any virtual disks that are mapped to this host object automatically map to the new ports.
Only WWPNs that are logged-in unconfigured can be added. For a list of candidate WWPNs, use the
lshbaportcandidate command.
Some HBA device drivers do not log in to the fabric until they can recognize target logical unit numbers
(LUNs). Because they do not log in, their WWPNs are not recognized as candidate ports. You can specify
the force parameter with the addhostport command to stop the validation of the WWPN list.
Note: When all I/O groups are removed from an iSCSI host, you cannot add a port to the iSCSI host
until you have mapped the iSCSI host to at least one I/O group. After mapping the iSCSI host to at least
one I/O group, resubmit the addhostport command. After adding the port to the host, you must create a
host authentication entry using the chhost command.
An invocation example
addhostport -hbawwpn 210100E08B251DD4 host_one
An invocation example
addhostport -iscsiname iqn.localhost.hostid.7f000001 mchost13
chhost
Use the chhost command to change the name or type of a host object. This does not affect any existing
virtual disk-to-host mappings.
Syntax
chhost [-type hpux | tpgs | generic | openvms] [-name new_name_arg] [-mask port_login_mask] [-chapsecret chap_secret | -nochapsecret] host_id | host_name
Parameters
-type hpux | tpgs | generic | openvms
(Optional) Specifies the type of host: hpux, tpgs, generic, or openvms. The default is generic. The
tpgs parameter enables extra target-port unit attentions. Refer to SAN Volume Controller host
attachment documentation for more information on the hosts that require the type parameter.
-name new_name_arg
(Optional) Specifies the new name that you want to assign to the host object.
-mask port_login_mask
(Optional) Specifies which node target ports a host can access and the Fibre Channel (FC) port mask
for the host. Worldwide port names (WWPNs) in the host object must access volumes from the node
ports that are included in the mask and are in the host object's I/O group. The port mask is 64 binary
bits and is made up of a combination of 0's and 1's, where 0 indicates that the corresponding FC I/O
port cannot be used and 1 indicates that it can be used. The right-most bit in the mask corresponds to
FC I/O port 1. Valid mask values range from 0000 (no ports enabled) to
1111111111111111111111111111111111111111111111111111111111111111 (all ports enabled). For example, a
mask of 111111101101 enables ports 1, 3, 4, 6, 7, 8, 9, 10, 11, and 12.
-chapsecret chap_secret
(Optional) Sets the Challenge Handshake Authentication Protocol (CHAP) secret used to authenticate
the host for iSCSI I/O. This secret is shared between the host and the cluster. The CHAP secret for
each host can be listed using the lsiscsiauth command.
-nochapsecret
(Optional) Clears any previously set CHAP secret for this host.
host_name | host_id
(Required) Specifies the host object to modify, either by ID or by current name.
Description
This command can change the name of the specified host to a new name, or it can change the type of
host. This command does not affect any of the current virtual disk-to-host mappings.
The port mask applies to logins from the host initiator port that are associated with the host object. For
each login between a host HBA port and node port, the node examines the port mask that is associated
with the host object for which the host HBA is a member and determines if access is allowed or denied.
If access is denied, the node responds to SCSI commands as if the HBA port is unknown.
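The port mask semantics described above can be verified with a few lines of Python (an illustrative helper; the function name is ours, not a CLI command). The right-most character of the mask corresponds to FC I/O port 1:

```python
def enabled_ports(mask: str) -> list:
    """Return the FC I/O port numbers enabled by a port login mask,
    reading the mask right to left so that the right-most bit maps to
    port 1, as described above. Illustrative only, not a CLI command."""
    return [port for port, bit in enumerate(reversed(mask), start=1) if bit == "1"]

print(enabled_ports("111111101101"))
# [1, 3, 4, 6, 7, 8, 9, 10, 11, 12] -- matches the example in the text
```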
Note: When all I/O groups are removed from an iSCSI host, the lsiscsiauth command does not display
the authentication entry for that host. Use the addhostiogrp command to map the iSCSI host to at least
one I/O group, and then use the addhostport command to add the iSCSI port into it. You must also add
authentication for that host using the chhost command with either the chapsecret or nochapsecret
parameter.
An invocation example
chhost -name testhostlode -mask 111111101101 hostone
An invocation example
chhost -type openvms 0
mkhost
Use the mkhost command to create a logical host object.
Syntax
mkhost {-hbawwpn wwpn_list | -iscsiname iscsi_name_list} [-name new_name] [-iogrp iogrp_list] [-mask port_login_mask] [-force] [-type hpux | tpgs | generic | openvms]
Parameters
-name new_name
(Optional) Specifies a name or label for the new host object.
-hbawwpn wwpn_list
(Required if you do not use iscsiname) Specifies one or more host bus adapter (HBA) worldwide
port names (WWPNs) to add to the specified host object. At least one WWPN or Internet Small
Computer System Interface (iSCSI) name must be specified. You cannot use this parameter with the
iscsiname parameter.
-iscsiname iscsi_name_list
(Required if you do not use hbawwpn) Specifies the comma-separated list of iSCSI names to add to the
host. At least one WWPN or iSCSI name must be specified. You cannot use this parameter with the
hbawwpn parameter.
-iogrp iogrp_list
(Optional) Specifies a set of one or more input/output (I/O) groups from which the host can access
the VDisks (volumes). I/O groups are specified using their names or IDs, separated by a colon.
Names and IDs can be mixed in the list. If this parameter is not specified, the host is associated with
all I/O groups.
-mask port_login_mask
(Optional) Specifies which node target ports a host can access and the Fiber Channel (FC) port mask
for the host. Worldwide port names (WWPNs) in the host object must access volumes from the node
ports that are included in the mask and are in the host object's I/O group. The port mask is 64 binary
bits and is made up of a combination of 0's and 1's, where 0 indicates that the corresponding FC I/O
port cannot be used and 1 indicates that it can be used. The right-most bit in the mask corresponds to
FC I/O port 1. Valid mask values range from 0000 (no ports enabled) to
1111111111111111111111111111111111111111111111111111111111111111 (all ports enabled). For example, a
mask of 111111101101 enables ports 1, 3, 4, 6, 7, 8, 9, 10, 11, and 12.
-force
(Optional) Specifies that a logical host object be created without validation of the WWPNs.
-type hpux | tpgs | generic | openvms
(Optional) Specifies the type of host. The default is generic. The tpgs parameter enables extra
target-port unit attentions. Refer to SAN Volume Controller host attachment documentation for more
information on the hosts that require the type parameter.
Description
The mkhost command associates one or more HBA WWPNs or iSCSI names with a logical host object.
This command creates a new host. The ID is displayed when the command completes. You can
subsequently use this object when you map virtual disks to hosts by using the mkvdiskhostmap command.
Issue the mkhost command only once. The cluster scans the fabric for WWPNs in the host zone. The
cluster cannot determine on its own which WWPNs belong to which hosts. Therefore, you must use
the mkhost command to identify the hosts.
After you identify the hosts, mappings are created between hosts and virtual disks. These mappings
effectively present the virtual disks to the hosts to which they are mapped. All WWPNs in the host object
are mapped to the virtual disks.
Some HBA device drivers do not log in to the fabric until they can see target logical unit numbers
(LUNs). Because they do not log in, their WWPNs are not recognized as candidate ports. You can specify
the force parameter with this command to stop the validation of the WWPN list.
This command fails if you add the host to an I/O group that is associated with more host ports or host
objects than is allowed by the limits within the cluster.
An invocation example
mkhost -name hostone -hbawwpn 210100E08B251DD4 -force -mask 111111101101
An invocation example
mkhost -iscsiname iqn.localhost.hostid.7f000001 -name newhost
An invocation example
mkhost -hbawwpn 10000000C92BB490 -type openvms
rmhost
Use the rmhost command to delete a host object.
Syntax
rmhost [-force] host_id | host_name
Parameters
-force
(Optional) Specifies that you want the system to delete the host object even if mappings still exist
between this host and virtual disks (VDisks). When the -force parameter is specified, the mappings
are deleted before the host object is deleted.
host_name | host_id
(Required) Specifies the host object to delete, either by ID or by name.
Description
The rmhost command deletes the logical host object. The WWPNs that were contained by this host
object (if the host is still connected and logged in to the fabric) are returned to the unconfigured state.
When you issue the lshbaportcandidate command, these WWPNs are listed as candidate ports.
If any mappings still exist between this host and virtual disks, the command fails unless you specify the
-force parameter. When the -force parameter is specified, the rmhost command deletes the mappings
before the host object is deleted.
An invocation example
rmhost host_one
rmhostiogrp
Use the rmhostiogrp command to delete mappings between one or more input/output (I/O) groups
and a specified host object.
Syntax
rmhostiogrp {-iogrp iogrp_list | -iogrpall} [-force] host_id | host_name
Parameters
-iogrp iogrp_list
(Required if you do not use -iogrpall) Specifies a set of one or more I/O group mappings to delete from the host. You
cannot use this parameter with the -iogrpall parameter.
-iogrpall
(Optional) Specifies that all the I/O group mappings that are associated with the specified host must
be deleted from the host. You cannot use this parameter with the -iogrp parameter.
-force
(Optional) Specifies that you want the system to remove the specified I/O group mappings on the
host even if the removal of a host to I/O group mapping results in the loss of VDisk-to-host
mappings (host mappings).
222 SAN Volume Controller and Storwize V7000: Command-Line Interface User's Guide
host_id | host_name
(Required) Specifies the identity of the host either by ID or name from which the I/O group
mappings must be deleted.
Description
The rmhostiogrp command deletes the mappings between the list of I/O groups and the specified host
object.
If a host is defined in two I/O groups, and has access to a volume through both I/O groups, an attempt
to remove the host from just one of those I/O groups fails, even with -force specified. To resolve this
problem, do one of the following:
v Delete the host mappings that are causing the error
v Delete the volumes or the host
Note: When all I/O groups are removed from an Internet Small Computer System Interface (iSCSI) host,
and you want to add an iSCSI port to the host, refer to the addhostport and chhost commands.
An invocation example
rmhostiogrp -iogrp 1:2 host0
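An invocation example with the -iogrpall parameter (the host name is illustrative), which removes every I/O group mapping from the host:
rmhostiogrp -iogrpall host0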
rmhostport
Use the rmhostport command to delete worldwide port names (WWPNs) or iSCSI names from an
existing host object.
Syntax
rmhostport {-hbawwpn wwpn_list | -iscsiname iscsi_name_list} [-force] {host_name | host_id}
Parameters
-hbawwpn wwpn_list
(Required if you do not use iscsiname) Specifies the list of Fibre Channel host ports to delete from
the host. At least one WWPN or iSCSI name must be specified. You cannot use this parameter with
the iscsiname parameter.
-iscsiname iscsi_name_list
(Required if you do not use hbawwpn) Specifies the comma-separated list of iSCSI names to delete
from the host. At least one WWPN or iSCSI name must be specified. You cannot use this parameter
with the hbawwpn parameter.
-force
(Optional) Forces the deletion of the specified ports. This overrides the check that all of the WWPNs
or iSCSI names in the list are mapped to the host specified.
Important: Using the force parameter might result in a loss of access. Use it only under the direction
of the IBM Support Center.
host_name | host_id
(Required) Specifies the host name or the host ID.
Description
This command deletes the list of HBA WWPNs or iSCSI names from the specified host object. If the
WWPN ports are still logged in to the fabric, they become unconfigured and are listed as candidate
WWPNs. See also the lshbaportcandidate command.
Any virtual disks that are mapped to this host object are automatically unmapped from the ports.
Replacing an HBA in a host: List the candidate HBA ports by issuing the lshbaportcandidate command.
A list of the HBA ports that are available to be added to host objects is displayed. One or more of these
ports corresponds with one or more WWPNs that belong to the new HBA. Locate the host object that
corresponds to the host in which you have replaced the HBA. The following command lists all the
defined host objects:
lshost
To list the WWPNs that are currently assigned to the host, issue the following:
lshost hostobjectname
Add the new ports to the existing host object by issuing the following command:
addhostport -hbawwpn wwpn_list hostobjectname/id
where wwpn_list is one or more WWPNs, separated by a colon (:), and hostobjectname/id corresponds to
the values listed in the previous steps.
Remove the old ports from the host object by issuing the following command:
rmhostport -hbawwpn wwpn_list hostobjectname/id
where wwpn_list is one or more WWPNs, separated by a colon (:), that are listed in the previous step
and belong to the old HBA that has been replaced. Any mappings that exist between the
host object and VDisks are automatically applied to the new WWPNs. Therefore, the host recognizes that
the VDisks are the same SCSI LUNs as before. See the Multipath Subsystem Device Driver: User's Guide for
additional information about dynamic reconfiguration.
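For example, assuming the replacement HBA presents WWPN 210100E08B251DD5, the old HBA presented WWPN 210100E08B251DD4, and the host object is named host_one (all three values are illustrative), the sequence might be:
addhostport -hbawwpn 210100E08B251DD5 host_one
rmhostport -hbawwpn 210100E08B251DD4 host_one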
An invocation example
rmhostport -hbawwpn 210100E08B251DD4 host_one
An invocation example
rmhostport -iscsiname iqn.localhost.hostid.7f000001 mchost13
Chapter 17. Information commands
The information commands enable you to display specific types of SAN Volume Controller information.
These commands return no output but exit successfully when there is no information to display.
Note: IDs are assigned at run-time by the system and cannot be relied upon to be the same after
configuration restoration. Therefore, use object names instead of IDs whenever possible.
ls2145dumps (Deprecated)
Attention: The ls2145dumps command is deprecated. Use the lsdumps command to display a list of files
in a particular dumps directory.
lscimomdumps (Deprecated)
Attention: The lscimomdumps command is deprecated. Use the lsdumps command to display a list of files
in a particular dumps directory.
lscopystatus
Use the lscopystatus command to determine whether any file copies are currently in progress.
Syntax
lscopystatus [-nohdr] [-delim delimiter]
Parameters
-nohdr
(Optional) By default, headings are displayed for each column of data in a concise style view, and for
each item of data in a detailed style view. The -nohdr parameter suppresses the display of these
headings.
-delim delimiter
(Optional) By default in a concise view, all columns of data are space-separated, with the width of
each column set to the maximum width of each item of data. The -delim parameter overrides this
behavior and separates all columns of data with the specified delimiter character.
Description
This command displays an indicator that shows if a file copy is currently in progress. Only one file can
be copied in the cluster at a time.
An invocation example
lscopystatus
lsclustercandidate
Attention