
VMworld Europe 2014: Virtual SAN Best Practices and Use Cases



  1. Disclaimer • This presentation may contain product features that are currently under development. • This overview of new technology represents no commitment from VMware to deliver these features in any generally available product. • Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind. • Technical feasibility and market demand will affect final delivery. • Pricing and packaging for any new technologies or features discussed or presented have not been determined. CONFIDENTIAL
  2. Agenda • 1 Introduction to VMware Virtual SAN • 2 Use Case Overview • 3 Use Case Characteristics & Sizing Considerations • 5 Best Practices • 6 Q & A – We'd love to hear about your use cases and recommendations
  3. Not Covered • Hardware Best Practices: STO1211 - Virtual SAN Ready Node and Hardware Guidance for Hypervisor Converged Infrastructure • Architecture Deep Dive: STO1279 - Virtual SAN Architecture Deep Dive • Troubleshooting: STO3098 - Virtual SAN Best Practices for Monitoring and Troubleshooting
  4. The Software-Defined Data Center • Expand virtual compute to all applications • Transform storage by aligning it with app demands • Virtualize the network for speed and efficiency • Management tools give way to automation
  5. The Software-Defined Data Center • Transform storage by aligning it with app demands
  6. VMware Software-Defined Storage – Bringing the Efficient Operational Model of Virtualization to Storage • Policy-driven Control Plane • Virtual Data Plane – Virtual Datastores with Virtual Data Services (Data Protection, Mobility, Performance) • Underlying hardware: x86 Servers, SAN / NAS, Cloud Object Storage
  7. VMware Virtual SAN Introduction
  8. VMware Virtual SAN Introduction – Hypervisor-Converged Storage Platform (vSphere + Virtual SAN) • Software-defined storage solution. • Designed to aggregate locally attached storage from each ESXi host in a cluster. • Hybrid disk storage solution – magnetic disks (HDD) and flash-based disks (SSD). • VM-centric data operations and policy-driven management principles. • Resilient design based on a distributed RAID architecture – no single point of failure. • Dynamic capacity and performance scalability.
  9. VMware Virtual SAN Use Cases and Best Practices
  10. Use Cases • Virtual Desktop (Horizon) • DMZ / Isolated – Regulatory Compliance (new) • Management Clusters • Test / Dev / Staging • SDLC – Dev / Test / Stage / Prod (new) • Backup and DR Target (Site A / Site B) • Production – Tier 2 / Tier 3 • ROBO • Private cloud
  11. Use Cases
  12. Virtual Desktop Infrastructure (Horizon) – Key Requirements • Handle peak performance requirements – boot storms, login storms, read/write • Support high VDI density • Linearly scalable capabilities – scale up, scale out • Cost • Supportability for all types of desktops
  13. Virtual Desktop Infrastructure – Advantages • Storage for VDI is contained within a single server • Allows granular scaling – add one server with SSD & disk to scale out • Deployment simplicity – eliminates the need for extensive storage design and sizing • Accelerated deployment – customers can quickly go from POC to pilot to production
  14. Virtual Desktop Infrastructure – Best Practices 1. Desktop sizing 2. VSAN VM Storage Policy definition 3. Size the cluster 4. Size the host
  15. 1. Desktop Sizing • Class of worker and type of pool – Knowledge worker: dedicated full clone or dedicated linked clone – Task worker: floating linked clone • VM instance definition – OS – CPU, memory, storage, network • # of desktops per cluster • Desired consolidation ratio
  16. 2. Virtual SAN Policies • Failures to Tolerate – for n failures, n+1 copies and 2n+1 hosts – FTT has the greatest impact on capacity utilization in the cluster – set the default FTT value = 1; for linked-clone floating pools consider setting FTT=0 • Object space reservation – default is 0 – for full clones, consider setting this to 100%, as it guarantees deterministic placement of desktops • Flash Read Cache reservation – typically default is 0 – consider setting it to 10% for the replica disk in linked-clone floating pools • Number of disk stripes per host – leverage the default value of 1 for optimal performance • Horizon 6.0 automatically sets the recommended values for these policies
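The FTT arithmetic on this slide (tolerating n failures requires n+1 copies on at least 2n+1 hosts) is easy to sanity-check in code. A minimal illustrative sketch of the slide's arithmetic, not any VMware API:

```python
# Illustrative sketch of the Failures To Tolerate (FTT) rules above;
# names are ours, the arithmetic is the slide's.

def ftt_requirements(ftt):
    """For FTT=n, Virtual SAN keeps n+1 copies of each object
    and needs at least 2n+1 hosts (copies plus witness components)."""
    return {"copies": ftt + 1, "min_hosts": 2 * ftt + 1}

print(ftt_requirements(1))  # {'copies': 2, 'min_hosts': 3}
```

This makes the capacity impact of FTT concrete: moving from FTT=1 to FTT=2 goes from 2 to 3 copies of every object, which is why FTT dominates capacity utilization.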
  17. Horizon View – Default Storage Policies • Horizon View automatically creates default storage policies based on pool type: REPLICA_DISK – stripes 1, resiliency 1, space reservation 0% (thin), read cache reservation 10% • PERSISTENT_DISK – stripes 1, resiliency 1, space reservation 100% (thick), read cache reservation 0% • OS_DISK – stripes 1, resiliency 1, space reservation 0% (thin), read cache reservation 0% • VM_HOME – stripes 1, resiliency 1, space reservation 0% (thin), read cache reservation 0%
  18. 3. Sizing the Cluster • Cluster size – (# of desktops / desired consolidation ratio per host) + 1 – keep headroom for a host failure • Datastore capacity – usable: capacity per desktop * # of desktops per cluster – raw: usable * (FTT+1) * 130% • Flash capacity – 10% or more of usable datastore capacity • Keep the VSAN component count below the maximum limit (3000/host)
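The cluster-sizing rules above compose into a short calculation. A hedged sketch (function and parameter names are illustrative, not from the deck); fed the deck's own inputs of 1000 linked-clone desktops at a 100:1 consolidation ratio and 40 GB per desktop, it reproduces the later worked example:

```python
import math

def size_cluster(desktops, desktops_per_host, gb_per_desktop, ftt=1):
    """Apply the sizing rules above:
    hosts = (# of desktops / consolidation ratio per host) + 1 for failure headroom,
    raw   = usable * (FTT + 1) * 130%,
    flash = 10% or more of usable datastore capacity."""
    hosts = math.ceil(desktops / desktops_per_host) + 1
    usable_gb = desktops * gb_per_desktop
    raw_gb = usable_gb * (ftt + 1) * 1.30
    flash_gb = usable_gb * 0.10
    return hosts, usable_gb, raw_gb, flash_gb

# 1000 desktops, 100 per host, 40 GB each:
# 11 hosts, 40 TB usable, ~104 TB raw, 4 TB flash
print(size_cluster(1000, 100, 40))
```

Note the VSAN component-count limit is a separate check the sketch does not model; it depends on the number of objects per VM and the storage policy.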
  19. 4. Sizing the Host • CPU – average # of vCPUs per desktop – CPU utilization per desktop – additional CPU for View (up to 15%) – additional CPU for VSAN (up to 10%) • Memory – memory allocated per desktop – video memory overhead (# of monitors and monitor resolution per desktop) – View Accelerator (up to 2GB) – additional memory for VSAN (up to 10%) • Network – dedicated redundant 10GbE uplinks
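The per-host CPU and memory overheads above compose into a quick estimate. An illustrative sketch under stated assumptions: the 15%, 10% and 2 GB figures come from the slide, while the function names and the sample per-desktop figures (2 vCPUs, ~300 MHz average demand, 40 MB video overhead) are hypothetical:

```python
def host_cpu_mhz(desktops, vcpus_per_desktop, mhz_per_vcpu,
                 view_overhead=0.15, vsan_overhead=0.10):
    """Per-host CPU demand: aggregate desktop demand plus
    up to 15% for View and up to 10% for Virtual SAN."""
    base = desktops * vcpus_per_desktop * mhz_per_vcpu
    return base * (1 + view_overhead + vsan_overhead)

def host_memory_gb(desktops, gb_per_desktop, video_overhead_gb=0.04,
                   view_accelerator_gb=2.0, vsan_overhead=0.10):
    """Per-host memory: allocated desktop memory plus per-desktop video
    memory overhead, up to 2 GB for the View Accelerator, and ~10% for
    Virtual SAN."""
    base = desktops * (gb_per_desktop + video_overhead_gb)
    return (base + view_accelerator_gb) * (1 + vsan_overhead)

# 91 desktops/host (as in the sizing example), 2 vCPUs each at ~300 MHz:
print(host_cpu_mhz(91, 2, 300))   # roughly 68 GHz of demand per host
print(host_memory_gb(91, 2.0))    # roughly 206 GB per host
```

The point of the sketch is the shape of the calculation: overheads multiply the aggregate desktop demand, so they must be applied after summing per-desktop figures.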
  20. 4. Sizing the Host (contd.) • Magnetic disk – linked clones: 10K RPM or better – full clones: 7.2K or 10K RPM drives • Disk groups – consider multiple disk groups per host for improved performance, smaller failure domains and quicker rebuilds – linked clones: 1 disk group – full clones: 2 disk groups
  21. Sizing Example • Desktops – projected desktops: 1000 – VM storage: 40 GB – type of desktop: linked clone • Virtual SAN VM storage policy – failures to tolerate: 1 – Flash Read Cache reservation: 10% for the replica disk, 0% for the rest – object space reservation: 0% • Host sizing – capacity/host: ~10TB – # of disk groups/host: 2 – HDD/host: 10 x 900GB 10K RPM – SSD/host: 2 x 200 GB • Cluster sizing – # of hosts: 10+1 – # of VMs/host: 91 – total usable capacity: 40TB – total raw capacity: 40TB * 2 * 130% = 104TB – total flash capacity: 4TB
  22. Virtual Desktop Infrastructure – Ready Node and Service Profiles • Linked Clones – up to 100 VMs/node, up to 10K IOPS/node, 1.2TB raw capacity/node, 2x10 core CPU, 256 GB memory, 4x300GB SAS 15K RPM HDD, 1x400GB SSD (Class E), IO controller queue depth >=256, 10GbE NIC • Full Clones – up to 100 VMs/node, up to 10K IOPS/node, 10.8TB raw, 2x10 core CPU, 256 GB memory, 12x900GB SAS 10K RPM HDD, 2x400GB SSD (Class E), queue depth >=256, 10GbE NIC • High – up to 60 VMs/node, up to 20K IOPS/node, 14.4TB raw, 2x10 core CPU, 384 GB memory, 12x1.2TB SAS 10K RPM HDD, 2x400GB SSD (Class E), queue depth >=512, 10GbE NIC • Medium – up to 30 VMs/node, up to 12K IOPS/node, 8TB raw, 2x10 core CPU, 256 GB memory, 8x1TB NL-SAS 7.2K RPM HDD, 2x200GB SSD (Class D), queue depth >=256, 10GbE NIC
  23. Use Cases
  24. Remote Office / Branch Office – Key Requirements • Small number of virtual machines and capacity – 10 to 15 virtual machines – apps include file and print servers, domain controllers, small development environments – storage capacity less than 5 terabytes • Centralized management • Limited or no remote IT support on site • Smallest footprint possible • High availability with automated failover • Data protection to the datacenter
  25. Remote Office / Branch Office – Advantages • Host design – typically a 3-node, 1-CPU-per-node cluster – provides availability while maintaining costs! – a 1G network is OK • vCenter deployed at the data center or as a virtual appliance on Virtual SAN • High availability and automated failover • VDP/VR for asynchronous replication
  26. Remote Office / Branch Office – Best Practices • Avoid possible performance and recoverability issues – do not exceed the number of recommended virtual machines on a 1G network • VM Storage Policy definition – create a VM Storage Policy with the following settings: number of failures to tolerate 1 (FTT=1); object disk stripe width 1 (ODSW=1)
  27. Remote Office / Branch Office – Ready Node Profile • Low – up to 15 VMs/node, up to 2K IOPS/node, 5TB raw capacity/node, 1x6 core CPU, 64 GB memory, 5x1TB NL-SAS 7.2K RPM HDD, 1x200GB SSD (Class B or above), IO controller queue depth >=256, 1GbE NIC
  28. Use Cases
  29. DMZ | Isolated – Key Requirements • Complete resource access isolation (air gap) – logical – physical • Regulatory compliance – PCI – HIPAA – others • High availability with multiple levels of redundancy • RPO – 30 minutes
  30. DMZ | Isolated – Best Practices • Set up separate VSAN VLANs for your DMZ network • vCenter can be shared with other VSAN clusters • Deployment simplicity – follow the sizing principles covered elsewhere in this presentation
  31. DMZ | Isolated – Ready Node Profiles • High – up to 60 VMs/node, up to 20K IOPS/node, 14.4TB raw capacity/node, 2x10 core CPU, 384 GB memory, 12x1.2TB SAS 10K RPM HDD, 2x400GB SSD (Class E), IO controller queue depth >=512 • Medium – up to 30 VMs/node, up to 12K IOPS/node, 8TB raw, 2x10 core CPU, 256 GB memory, 8x1TB NL-SAS 7.2K RPM HDD, 2x200GB SSD (Class D), queue depth >=256
  32. Use Cases
  33. Management Cluster – Key Requirements • Minimum of 3 ESXi hosts • Support all necessary infrastructure applications and components • Separate infrastructure applications and components from resources utilized by production workloads • Improve manageability of the infrastructure
  34. Management Cluster – Advantages • Storage managed by the vSphere admin • Eliminates resource contention between production workloads and infrastructure components • Supports infrastructure applications • Flexible, scalable capabilities – scale up – scale out • Optimizes performance of resource-intensive management applications – vCenter Server – vCenter Operations – etc.
  35. Management Cluster – Best Practices • VM sizing – make sure the # of VMs/host is below the supported limit of 100 • VM Storage Policy – consider the use of object space reservation to sustain performance during data management operations – Flash Read Cache reservation: typically set to 0; one can consider reserving capacity for the base image • Hosts and disk groups – consider the use of 4 nodes – consider configuring multiple disk groups
  36. Management Cluster – Ready Node Profiles • High – up to 60 VMs/node, up to 20K IOPS/node, 14.4TB raw capacity/node, 2x10 core CPU, 384 GB memory, 12x1.2TB SAS 10K RPM HDD, 2x400GB SSD (Class E), IO controller queue depth >=512 • Medium – up to 30 VMs/node, up to 12K IOPS/node, 8TB raw, 2x10 core CPU, 256 GB memory, 8x1TB NL-SAS 7.2K RPM HDD, 2x200GB SSD (Class D), queue depth >=256
  37. Use Cases
  38. Backup and DR Target – Key Requirements • Self-service development environment with: – application lifecycle management – separate and isolated development execution zones (compute, storage, network) – accelerated workload deployments – performant, scalable and secure dedicated infrastructure – policy-based governance with automated delivery – fully automated policy migration across execution zones – interoperability with vSphere and third-party complementary solutions
  39. Backup and DR Target – Advantages • Flexible DR capabilities – support Virtual SAN as a target destination from any array • VM-centric protection • Guaranteed storage characteristics through VM Storage Policy – availability – performance – capacity • Integration with vCenter Site Recovery Manager for extensive capabilities – automated DR operation & orchestration – automated failover: execution of user-defined plans – automated failback: reverse the original recovery plan – planned migrations: ensure zero data loss – point-in-time recovery: multiple recovery points – non-disruptive tests: automated tests on isolated infrastructure
  40. Backup and DR Target – Best Practices • Keep track of component counts when using VR point-in-time recovery – plan cluster size and component count appropriately based on recovery configuration and requirements • Maintain consistent Virtual SAN VM storage policies across sites when source and destination are both Virtual SAN
  41. Backup and DR Target – Ready Node Profiles • High – up to 60 VMs/node, up to 20K IOPS/node, 14.4TB raw capacity/node, 2x10 core CPU, 384 GB memory, 12x1.2TB SAS 10K RPM HDD, 2x400GB SSD (Class E), IO controller queue depth >=512 • Medium – up to 30 VMs/node, up to 12K IOPS/node, 8TB raw, 2x10 core CPU, 256 GB memory, 8x1TB NL-SAS 7.2K RPM HDD, 2x200GB SSD (Class D), queue depth >=256
  42. Use Cases
  43. Software Development Lifecycle – Key Requirements • Self-service development environment with: – application lifecycle management – separate and isolated development execution zones (compute, storage, network) – accelerated workload deployments – performant, scalable and secure dedicated infrastructure – policy-based governance with automated delivery – fully automated policy migration across execution zones – interoperability with vSphere and third-party complementary solutions
  44. Software Development Lifecycle – Advantages • Flexible service offerings – high availability (tolerate failures) and performance for demanding workloads • Fully automated policy migration across execution zones • Ability to manage service offerings and automated migration capabilities on a per-VM basis • Better linear scalability options – scale capacity and IOPS together – avoid storage IO hot spots • Accelerated workload deployments • Policy-based governance with automated delivery • Interoperable with vSphere and third-party complementary solutions
  45. Software Development Lifecycle – Best Practices • Leverage the vCloud Automation Center Storage Policy-Based Management integration – define default policies per Virtual SAN cluster and execution zone – size Virtual SAN cluster capacity and acceleration layers to suit the different requirements of each execution zone • Implement approval procedures for: – virtual machine allocation – migration across execution zones
  46. Software Development Lifecycle – Ready Node Profiles • High – up to 60 VMs/node, up to 20K IOPS/node, 14.4TB raw capacity/node, 2x10 core CPU, 384 GB memory, 12x1.2TB SAS 10K RPM HDD, 2x400GB SSD (Class E), IO controller queue depth >=512 • Medium – up to 30 VMs/node, up to 12K IOPS/node, 8TB raw, 2x10 core CPU, 256 GB memory, 8x1TB NL-SAS 7.2K RPM HDD, 2x200GB SSD (Class D), queue depth >=256
  47. Summary • Virtual SAN is suitable for, but not limited to, the presented use cases • Always follow the recommended guidance for best results and supportability • We're here to help you size your environment!
