Accelerate with IBM Storage: IBM Spectrum Virtualize HyperSwap Deep Dive

Anúncio
Anúncio
Anúncio
Anúncio
Anúncio
Anúncio
Anúncio
Anúncio
Anúncio
Anúncio
Anúncio
Anúncio

  1. Accelerate with IBM Storage
     IBM Spectrum Virtualize HyperSwap Deep Dive
     Bill Wiegand, Spectrum Virtualize – Consulting IT Specialist, IBM
     © Copyright IBM Corporation 2015. Technical University/Symposia materials may not be reproduced in whole or in part without the prior written permission of IBM.
  2. Agenda
     • High Availability vs Disaster Recovery
     • Overview of HyperSwap Function
     • Overview of Demo Lab Setup
     • Outline of Steps and Commands to Configure HyperSwap
     • Show Host View of Its Storage
     • Demo Scenario 1
       • Fail paths from host at site 1 to its primary storage controller at site 1
     • Demo Scenario 2
       • Fail externally virtualized MDisk used as active quorum disk
       • Fail paths to externally virtualized storage system providing active quorum disk
     • Demo Scenario 3
       • Configure existing Volume as HyperSwap Volume
     • Demo Scenario 4
       • Fail entire storage controller at site 2 for newly configured HyperSwap Volume
  3. High Availability vs Disaster Recovery
     [Diagram: Cluster 1 at Site 1 (HA) and Cluster 2 at Site 2 (DR), connected by ISL 1 and ISL 2; Volume Mirroring within the HA site, Metro Mirror or Global Mirror between sites]
     Manual intervention required (a host-side sketch follows this slide):
     1. Stop all running servers
     2. Perform failover operations
     3. Remove server access in Site 1
     4. Grant server access in Site 2
     5. Start the servers in Site 2
     6. Import Volume Groups
     7. Vary on Volume Groups
     8. Mount Filesystems
     9. Recover applications
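     For steps 6 to 8, the recovery on an AIX server might look like the following sketch (volume group, hdisk, and mount point names are illustrative assumptions, not from the deck):
       importvg -y datavg hdisk2   # import the volume group from the surviving copy
       varyonvg datavg             # vary on the volume group
       mount /data                 # mount the filesystem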
  4. Today: SVC Enhanced Stretched Cluster
     • Today's stretched cluster technology splits an SVC's two-way cache across two sites
     • Allows host I/O to continue without loss of access to data if a site is lost
     • Enhanced Stretched Cluster in version 7.2 introduced the site concept to the code for policing configurations and optimizing data flow
     [Diagram: Node 1 in power domain 1 and Node 2 in power domain 2, each with a host, switch, and storage; quorum storage in power domain 3; reads are served locally while writes are mirrored across both sites]
  5. HyperSwap
     • HyperSwap is the next step of the HA (High Availability) solution
     • Provides most disaster recovery (DR) benefits of Metro Mirror as well
     • Uses intra-cluster synchronous remote copy (Metro Mirror) capabilities along with existing change volume and access I/O group technologies
     • Essentially makes a host's volumes accessible across two Storwize or SVC I/O groups in a clustered system by making the primary and secondary volumes of the Metro Mirror relationship, running under the covers, look like one volume to the host
  6. High Availability with HyperSwap
     • Hosts, SVC nodes, and storage are in one of two failure domains/sites
     • Volumes visible as a single object across both sites (I/O groups)
     [Diagram: HostA and HostB attached across Site 1 and Site 2; I/O group 0 (Node 1, Node 2) and I/O group 1 (Node 3, Node 4) each hold one copy of every volume (Vol-1p/Vol-1s, Vol-2p/Vol-2s)]
  7. High Availability with HyperSwap
     [Diagram: Site 1 and Site 2 each with hosts, an IBM Spectrum Virtualize system, and storage; public fabrics 1A/1B and 2A/2B joined by public ISLs; private fabrics 1 and 2 joined by a private ISL; quorum storage at Site 3; clustered Host C spans both sites]
     Hosts' ports can be:
     • Zoned to see IBM Spectrum Virtualize system ports on both sites, and will be automatically configured to use correct paths
     • Zoned only locally to simplify configuration, which only loses the ability for a host on one site to continue in the absence of local IBM Spectrum Virtualize system nodes
     Two SANs required for Enhanced Stretched Cluster, and recommended for HyperSwap:
     • Private SAN for node-to-node communication
     • Public SAN for everything else
     Storage systems can be:
     • IBM SVC for either HyperSwap or Enhanced Stretched Cluster
     • IBM Storwize V5000 or V7000 for HyperSwap only
     Quorum is provided by a SCSI controller marked with "Extended Quorum support" on the interoperability matrix. Quorum storage must be in a third site independent of site 1 and site 2, but visible to all nodes.
     Storage systems need to be zoned/connected only to nodes/node canisters in their site (stretched and hyperswap topologies only, excluding quorum storage).
     See Redbook SG24-8211-00 for more details.
  8. HyperSwap – What is a Failure Domain
     • Generally a failure domain will represent a physical location, but it depends on what type of failure you are trying to protect against
       • Could all be in one building, on different floors/rooms, or just different power domains in the same data center
       • Could be multiple buildings on the same campus
       • Could be multiple buildings up to 300 km apart
     • Key is the quorum disk
       • If you only have two physical sites, and the quorum disk has to be in one of them, some failure scenarios won't allow the cluster to survive automatically
       • The minimum is to have the active quorum disk system on a separate power grid in one of the two failure domains
  9. HyperSwap – Overview
     • Stretched Cluster requires splitting the nodes in an I/O group
       • Impossible with the Storwize family, since an I/O group is confined to an enclosure
       • After a site fails, write cache is disabled, which could affect performance
     • HyperSwap keeps the nodes in an I/O group together
       • Copies data between two I/O groups
       • Suitable for the Storwize family of products as well as SVC
       • Retains full read/write performance with only one site
  10. HyperSwap – Overview
      • SVC Stretched Cluster is not application aware
        • If one volume used by an application is unable to keep a site up to date, the other volumes won't pause at the same point, likely making the site's data unusable for disaster recovery
      • HyperSwap allows grouping of multiple volumes together in a consistency group
        • Data will be maintained consistently across the volumes
        • Significantly improves the use of HyperSwap for disaster recovery scenarios as well
      • There is no remote copy partnership configuration, since this is a single clustered system
        • Intra-cluster replication initial sync and resync rates can be configured normally using the 'chpartnership' CLI command (see the sketch below)
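      As an illustrative sketch only (the cluster name follows the lab used later in this deck, and the 50% background copy rate is an assumption, not a recommendation), the intra-cluster partnership might be tuned as follows:
      • IBM_Storwize:ATS_OXFORD3:superuser> lspartnership
      • IBM_Storwize:ATS_OXFORD3:superuser> chpartnership -backgroundcopyrate 50 ATS_OXFORD3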
  11. HyperSwap – Overview
      • Stretched Cluster discards old data during resynchronization
        • If one site is out of date and the system is automatically resynchronizing that copy, that site's data isn't available for disaster recovery, giving windows where both sites are online but loss of one site could lose data
      • HyperSwap uses Global Mirror with Change Volumes technology to retain the old data during resynchronization
        • Allows a site to continually provide disaster recovery protection throughout its lifecycle
      • Stretched Cluster did not know which sites hosts were in
        • To minimize I/O traffic across sites, more complex zoning and management of preferred nodes for volumes was required
      • Can use the HyperSwap function on any Storwize family system supporting multiple I/O groups
        • Two Storwize V5000 control enclosures
        • Two to four Storwize V7000 Gen1/Gen2 control enclosures
        • Four- to eight-node SVC cluster
        • Note that HyperSwap is not a supported configuration with Storwize V3700, since it can't be clustered
  12. HyperSwap – Overview
      • Limits and Restrictions
        • Max of 1024 HyperSwap volumes per cluster
          • Each HyperSwap volume requires four FC mappings, and max mappings is 4096
        • Max capacity is 1PB per I/O group or 2PB per cluster
          • Much lower limit for Gen1 Storwize V7000 (runs into the limit of remote copy bitmap space)
        • Can't replicate HyperSwap volumes to another cluster for DR using remote copy
        • Limited FlashCopy Manager support
          • Can't do reverse FlashCopy to HyperSwap volumes
        • Max of 8 paths per HyperSwap volume, same as a regular volume
        • AIX LPM not supported today
        • No GUI support currently
      • Requirements
        • Remote copy license
        • For Storwize configurations an external virtualization license is required
          • Minimum one enclosure license for the storage system providing the active quorum disk
        • Size public/private SANs as we do with ESC today
          • Only applicable if using ISLs between sites/I/O groups
      • Recommended Use Cases
        • Active/Passive site configuration
        • Hosts access given volumes from one site only
  13. Example Configuration
      [Diagram: A local host and federated hosts attached to a four-node SVC cluster (IOGroup-0 and IOGroup-1); Vol-1 is a HyperSwap volume with a primary copy in one I/O group and a secondary copy in the other; back-end storage includes EMC, HP, and IBM controllers with 2TB and 3TB MDisks]
  14. Local Host Connectivity
      [Diagram: Local host with 2 HBAs and 4 paths through Fab-A and Fab-B to the SVC nodes of IOGroup-0 and IOGroup-1; MDisks: 2TB flash MDisk (EMC), 3TB V5000 MDisk (IBM), 2TB flash MDisk (HP), 3TB V5000 MDisk (IBM)]
  15. Federated Host Connectivity
      [Diagram: Federated host with 2 HBAs and 8 paths through Fab-A and Fab-B to all four SVC nodes across IOGroup-0 and IOGroup-1; same back-end MDisks as the previous slide]
  16. Storage Connectivity
      [Diagram: The four SVC nodes of IOGroup-0 and IOGroup-1 each connect with 2 ports to Fab-A and Fab-B; the back-end storage controller connects with 2 ports to each fabric]
  17. HyperSwap – Understanding Quorum Disks
      • By default the clustered system selects three quorum disk candidates automatically
        • With SVC it is the first three MDisks it discovers from any supported disk controller
        • On Storwize it is three internal disk drives unless we have external disk virtualized; then, like SVC, it is the first three MDisks discovered
      • When the cluster topology is set to "hyperswap", the quorum disks are dynamically changed for proper configuration for a HyperSwap enabled clustered system
      • IBM_Storwize:ATS_OXFORD3:superuser> lsquorum
        quorum_index status id name        controller_id controller_name active object_type override
        0            online 79                                           no     drive       no
        1            online 13                                           no     drive       no
        2            online 0  DS8K_mdisk0 1             DS8K-SJ9A       yes    mdisk       no
      • There is only ever one active quorum disk
        • Used solely for tie-break situations when the two sites lose access to each other
        • Must be on externally virtualized storage that supports Extended Quorum
      • The three are used to store critical cluster configuration data
  18. HyperSwap – Understanding Quorum Disks
      • Quorum disk configuration not exposed in GUI
      • 'lsquorum' shows which three MDisks or drives are the quorum candidates and which one is currently the active one
      • No need to set override to 'yes' as was needed in the past with Enhanced Stretched Cluster
      • Active quorum disk must be external and on a storage system that supports "Extended Quorum" as noted on the support matrix
        • http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003741
        • http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003658
        • Only certain IBM disk systems support extended quorum
  19. HyperSwap – Lab Setup
      [Diagram: Storwize V7000 clustered system separated at distance: I/O Group 0 control enclosure plus expansion enclosures at Site 1, I/O Group 1 control enclosure plus expansion enclosures at Site 2; host volume available from both sites]
      • A HyperSwap clustered system provides high availability between different sites or within the same data center
        • An I/O Group is assigned to each site
        • A copy of the data is at each site
        • Host is associated with a site
      • If you lose access to I/O Group 0 from the host, then the host multi-pathing will automatically access the data via I/O Group 1
      • If you only lose the primary copy of the data, then the HyperSwap function will forward requests to I/O Group 1 to service I/O
      • If you lose I/O Group 0 entirely, then the host multi-pathing will automatically access the other copy of the data on I/O Group 1
  20. HyperSwap – Configuration
      • NAMING THE 3 DIFFERENT SITES:
      • IBM_Storwize:ATS_OXFORD3:superuser> lssite
        id site_name
        1  Site1
        2  Site2
        3  Site3
      • IBM_Storwize:ATS_OXFORD3:superuser> chsite -name GBURG-03 1
      • IBM_Storwize:ATS_OXFORD3:superuser> chsite -name GBURG-05 2
      • IBM_Storwize:ATS_OXFORD3:superuser> chsite -name QUORUM 3
      • LIST THE 4 CLUSTER NODES:
      • IBM_Storwize:ATS_OXFORD3:superuser> lsnodecanister -delim :
        id:name:UPS_serial_number:WWNN:status:IO_group_id:IO_group_name:config_node:UPS_unique_id:hardware:iscsi_name:iscsi_alias:panel_name:enclosure_id:canister_id:enclosure_serial_number
        1:node1::500507680200005D:online:0:io_grp0:no::100:iqn.1986-03.com.ibm:2145.atsoxford3.node1::30-1:30:1:78G00PV
        2:node2::500507680200005E:online:0:io_grp0:no::100:iqn.1986-03.com.ibm:2145.atsoxford3.node2::30-2:30:2:78G00PV
        3:node3::500507680205EF71:online:1:io_grp1:yes::300:iqn.1986-03.com.ibm:2145.atsoxford3.node3::50-1:50:1:78REBAX
        4:node4::500507680205EF72:online:1:io_grp1:no::300:iqn.1986-03.com.ibm:2145.atsoxford3.node4::50-2:50:2:78REBAX
  21. HyperSwap – Configuration
      • ASSIGN NODES TO SITES (SITE 1 MAIN, SITE 2 AUX):
      • IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-03 node1
      • IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-03 node2
      • IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-05 node3
      • IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-05 node4
      • ASSIGN HOSTS TO SITES (SITE 1 MAIN, SITE 2 AUX):
      • IBM_Storwize:ATS_OXFORD3:superuser> chhost -site GBURG-03 SAN355-04
      • IBM_Storwize:ATS_OXFORD3:superuser> chhost -site GBURG-05 SAN3850-1
      • ASSIGN QUORUM DISK ON CONTROLLER TO QUORUM SITE:
      • IBM_Storwize:ATS_OXFORD3:superuser> chcontroller -site QUORUM DS8K-SJ9A
  22. HyperSwap – Configuration
      • LIST QUORUM LOCATIONS:
      • IBM_Storwize:ATS_OXFORD3:superuser> lsquorum
        quorum_index status id name        controller_id controller_name active object_type override
        0            online 79                                           no     drive       no
        1            online 13                                           no     drive       no
        2            online 0  DS8K_mdisk0 1             DS8K-SJ9A       yes    mdisk       no
      • DEFINE TOPOLOGY (a verification sketch follows this slide):
      • IBM_Storwize:ATS_OXFORD3:superuser> chsystem -topology hyperswap
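      As a quick check (a sketch; 'lssystem' reports the cluster topology on 7.5-era code, and the output here is abbreviated to the relevant fields):
      • IBM_Storwize:ATS_OXFORD3:superuser> lssystem
        ...
        topology hyperswap
        topology_status dual_site
        ...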
  23. HyperSwap – Configuration
      • MAKE VDISKS (SITE 1 MAIN, SITE 2 AUX):
      • IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_VOL10 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL
      • IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_VOL20 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL
      • IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_AUX10 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL
        Virtual Disk, id [2], successfully created
      • IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_AUX20 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL
      • MAKE CHANGE VOLUME VDISKS (SITE 1 MAIN, SITE 2 AUX):
      • IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_CV10 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL -rsize 1% -autoexpand
      • IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_CV20 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL -rsize 1% -autoexpand
      • IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_CV10 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL -rsize 1% -autoexpand
      • IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_CV20 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL -rsize 1% -autoexpand
  24. HyperSwap – Configuration
      • ADD ACCESS TO THE MAIN SITE VDISKS FROM THE OTHER SITE (IOGRP1):
      • IBM_Storwize:ATS_OXFORD3:superuser> addvdiskaccess -iogrp 1 GBURG03_VOL10
      • IBM_Storwize:ATS_OXFORD3:superuser> addvdiskaccess -iogrp 1 GBURG03_VOL20
      • DEFINE CONSISTENCY GROUP:
      • IBM_Storwize:ATS_OXFORD3:superuser> mkrcconsistgrp -name GBURG_CONGRP
      • DEFINE THE TWO REMOTE COPY RELATIONSHIPS:
      • IBM_Storwize:ATS_OXFORD3:superuser> mkrcrelationship -master GBURG03_VOL10 -aux GBURG05_AUX10 -cluster ATS_OXFORD3 -activeactive -name VOL10REL -consistgrp GBURG_CONGRP
      • IBM_Storwize:ATS_OXFORD3:superuser> mkrcrelationship -master GBURG03_VOL20 -aux GBURG05_AUX20 -cluster ATS_OXFORD3 -activeactive -name VOL20REL -consistgrp GBURG_CONGRP
  25. HyperSwap – Configuration
      • ADD THE CHANGE VOLUMES TO EACH RELATIONSHIP DEFINED:
      • IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -masterchange GBURG03_CV10 VOL10REL
      • IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -masterchange GBURG03_CV20 VOL20REL
      • IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -auxchange GBURG05_CV10 VOL10REL
      • IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -auxchange GBURG05_CV20 VOL20REL
      • At this point the replication between master and aux volumes starts automatically
      • Remote copy relationship state will be "inconsistent copying" until the primary and secondary volumes are in sync, then the state changes to "consistent synchronized" (see the sketch below)
      • MAP HYPERSWAP VOLUMES TO HOST:
      • IBM_Storwize:ATS_OXFORD3:superuser> mkvdiskhostmap -host SAN355-04 GBURG03_VOL10
      • IBM_Storwize:ATS_OXFORD3:superuser> mkvdiskhostmap -host SAN355-04 GBURG03_VOL20
      ** Note that we map only the primary/master volume to the host, not the secondary/auxiliary volume of the Metro Mirror relationship created earlier
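      A minimal way to watch the sync progress (a sketch; relationship name from the example above, output abbreviated and the progress value illustrative):
      • IBM_Storwize:ATS_OXFORD3:superuser> lsrcrelationship VOL10REL
        ...
        state inconsistent_copying
        progress 42
        ...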
  26. HyperSwap – Configuration
      [Slide content is a screenshot/diagram only; no transcribed text]
  27. Demonstration
      • Show Host View of Its Storage
      • Demo Scenario 1
        • Fail paths from host at site 1 to its primary storage controller at site 1
      • Demo Scenario 2
        • Fail externally virtualized MDisk used as active quorum disk
        • Fail paths to externally virtualized storage system providing active quorum disk
      • Demo Scenario 3
        • Configure existing Volume as HyperSwap Volume (a command sketch follows this slide)
      • Demo Scenario 4
        • Fail entire storage controller at site 2 for newly configured HyperSwap Volume
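      For Demo Scenario 3, converting an existing volume might follow the same pattern as slides 23-25 (a sketch; EXISTING_VOL, its assumed 10 GB size, and the other names are illustrative, not from the deck; the aux volume must match the existing volume's size):
      • IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name EXISTING_AUX -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL
      • IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name EXISTING_CVM -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL -rsize 1% -autoexpand
      • IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name EXISTING_CVA -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL -rsize 1% -autoexpand
      • IBM_Storwize:ATS_OXFORD3:superuser> addvdiskaccess -iogrp 1 EXISTING_VOL
      • IBM_Storwize:ATS_OXFORD3:superuser> mkrcrelationship -master EXISTING_VOL -aux EXISTING_AUX -cluster ATS_OXFORD3 -activeactive -name EXISTINGREL
      • IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -masterchange EXISTING_CVM EXISTINGREL
      • IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -auxchange EXISTING_CVA EXISTINGREL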
  28. Miscellaneous
      • Recommended to use 8 FC ports per node canister so we can dedicate some ports strictly for the synchronous mirroring between the I/O groups (a port-masking sketch follows)
      • Link to HyperSwap whitepaper on Techdocs
        • https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102538
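      One way to express that dedication is the system-wide local FC port mask, which limits node-to-node traffic (including inter-I/O-group replication) to the ports whose bits are set; the mask is read right to left, one bit per port. The value below, reserving ports 7 and 8 on an 8-port layout, is an assumption for illustration, not a recommendation from this deck:
      • IBM_Storwize:ATS_OXFORD3:superuser> chsystem -localfcportmask 11000000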
