VMware VSAN Technical Deep Dive - March 2014

  1. © 2014 VMware Inc. All rights reserved. VMware Virtual SAN 5.5 Technical Deep Dive – March 2014. Alberto Farronato, VMware; Wade Holmes, VMware.
  2. Software-Defined Storage: bringing the efficient operational model of virtualization to storage. Virtual data services (data protection, mobility, performance) and a policy-driven control plane sit on a virtual data plane spanning three pools on x86 servers: a SAN/NAS pool, a hypervisor-converged storage pool (Virtual SAN), and a cloud object storage pool.
  3. Virtual SAN: Radically Simple Hypervisor-Converged Storage. vSphere + VSAN. The basics: • Runs on any standard x86 server • Policy-based management framework • Embedded in vSphere kernel • High-performance flash architecture • Built-in resiliency • Deep integration with the VMware stack. [Diagram: each host's SSDs and hard disks form the VSAN shared datastore.]
  4. Unprecedented Customer Interest and Validation: 12,000+ Virtual SAN beta participants; 95% of beta customers recommend VSAN; 90% believe VSAN will impact storage the way vSphere impacted compute.
  5. Why Virtual SAN? Radically simple: • Two-click install • Single pane of glass • Policy-driven • Self-tuning • Integrated with the VMware stack. High performance: • Embedded in vSphere kernel • Flash-accelerated • Up to 2M IOPS from a 32-node cluster • Granular and linear scaling. Lower TCO: • Server-side economics • No large upfront investments • Grow-as-you-go • Easy to operate with powerful automation • No specialized skill set.
  6. Two Ways to Build a Virtual SAN Node (completely hardware independent). 1. Virtual SAN Ready Node: a preconfigured server ready to run Virtual SAN, with multiple options available at GA + 30. 2. Build Your Own, using the Virtual SAN Compatibility Guide:* choose individual components (SSD or PCIe flash, SAS/NL-SAS/SATA HDDs, HBA/RAID controller) for any server on the vSphere Hardware Compatibility List. *Note: For additional details, please refer to the Virtual SAN VMware Compatibility Guide page. Components for Virtual SAN must be chosen from the Virtual SAN HCL; using any other components is unsupported.
  7. Broad Partner Ecosystem Support for Virtual SAN: storage, server/systems solutions, and data protection solutions.
  8. Virtual SAN Simplifies and Automates Storage Management: per-VM storage service levels from a single self-tuning datastore. With Storage Policy-Based Management, per-VM storage policies are set based on application needs (capacity, performance, availability), and the software automates control of service levels against SLAs. No more LUNs/volumes! "Virtual SAN is easy to deploy, just a few check boxes. No need to configure RAID." — Jim Streit, IT Architect, Thomson Reuters
  9. Virtual SAN Delivers Enterprise-Grade Scale. Maximum scalability per Virtual SAN cluster: 32 hosts, 2M IOPS, 3,200 VMs, 4.4 petabytes. "Virtual SAN allows us to build out scalable heterogeneous storage infrastructure like the Facebooks and Googles of the world. Virtual SAN allows us to add scale, add resources, while being able to service high-performance workloads." — Dave Burns, VP of Tech Ops, Cincinnati Bell
  10. High Performance with Elastic and Linear Scalability: up to 2M IOPS in a 32-node cluster, and VDI density comparable to an all-flash array.
IOPS vs. number of hosts in the Virtual SAN cluster (IOmeter benchmark; mixed = 70% read, 4K, 80% random):
• 4 hosts: 80K mixed / 253K read • 8 hosts: 160K mixed / 505K read • 16 hosts: 320K mixed / 1M read • 24 hosts: 480K mixed / 1.5M read • 32 hosts: 640K mixed / 2M read.
[Chart: number of VDI VMs (286 to 805) vs. number of hosts (3 to 8), VSAN vs. all-SSD array; View Planner benchmark.]
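The linearity claim is easy to eyeball by dividing the quoted figures by host count. A minimal sketch (my arithmetic on the slide's benchmark numbers, not new measurements):

```python
# Per-host IOPS implied by the slide's chart data; roughly constant values
# across cluster sizes illustrate the claimed linear scaling.
mixed_iops = {4: 80_000, 8: 160_000, 16: 320_000, 24: 480_000, 32: 640_000}
read_iops = {4: 253_000, 8: 505_000, 16: 1_000_000, 24: 1_500_000, 32: 2_000_000}

for label, series in (("mixed 70/30", mixed_iops), ("100% read", read_iops)):
    for hosts, iops in series.items():
        print(f"{label}: {hosts} hosts -> {iops / hosts:,.0f} IOPS/host")
```

The mixed workload works out to roughly 20K IOPS per host and the read workload to roughly 63K IOPS per host at every cluster size, which is what "linear scaling" means here.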
  11. Virtual SAN is Deeply Integrated with the VMware Stack: ideal for VMware environments. (CONFIDENTIAL – NDA ONLY.) Integration points: vMotion, vSphere HA, DRS, Storage vMotion, vSphere Snapshots, and Linked Clones; data protection with VDP Advanced and vSphere Replication; VMware View for virtual desktops; vCenter Operations Manager and vCloud Automation Center for IaaS cloud ops and automation; Site Recovery Manager for disaster recovery (Site A / Site B); all driven through Storage Policy-Based Management.
  12. Virtual SAN 5.5 – Pricing and Packaging. VSAN editions and bundles:
• Virtual SAN: standalone edition; no capacity, scale, or workload restriction; licensed per CPU; $2,495 (USD).
• Virtual SAN with Data Protection: bundle of Virtual SAN and vSphere Data Protection Advanced; licensed per CPU; $2,875 (promo ends Sept 15th, 2014).
• Virtual SAN for Desktop: standalone edition; VDI only (VMware or Citrix); concurrent or named users; licensed per user; $50.
Features (persistent datastore, read/write caching, policy-based management, Virtual Distributed Switch, replication via vSphere Replication, snapshots and clones via vSphere Snapshots & Clones) are included in all editions; backup via vSphere Data Protection Advanced comes with the Data Protection bundle only.
Not for public disclosure: NDA material only; do not share with public until GA. Note: Regional pricing in standard VMware currencies applies. Please check local price lists for more detail.
  13. Virtual SAN – Launch Promotions (20% discount each):
• Bundle promo, Virtual SAN with Data Protection: Virtual SAN (1 CPU) + vSphere Data Protection Advanced (1 CPU); promo price $2,875/CPU; ends 9/15/2014.
• Bundle promo, VSA to VSAN upgrade: Virtual SAN (6 CPUs per bundle); promo price $9,180/bundle; ends 9/15/2014.
• Beta promo, register and download: Virtual SAN (1 CPU); promo price $1,996/CPU; ends 6/15/2014; terms: minimum purchase of 10 CPUs, first purchase only.
Not for public disclosure: NDA material only; do not share with public until GA. Note: Regional pricing for promotions exists in standard VMware currencies. Please check local price lists for more detail.
  14. Virtual SAN Reduces CAPEX and OPEX for Better TCO.
CAPEX: • Server-side economics • No Fibre Channel network • Pay-as-you-grow.
OPEX: • Simplified storage configuration • No LUNs • Managed directly through the vSphere Web Client • Automated VM provisioning • Simplified capacity planning.
As low as $0.50/GB²; as low as $0.25/IOPS³; 5x lower OPEX⁴; up to 50% TCO reduction; as low as $50/desktop¹.
Footnotes: 1. Full clones. 2. Usable capacity. 3. Estimated based on 2013 street pricing; CAPEX includes storage hardware + software license costs. 4. Source: Taneja Group.
Not for public disclosure: NDA material only; do not share with public until GA.
  15. Flexibly Configure for Performance and Capacity. Three example nodes, each 2x 8-core CPU with 128GB memory, from performance-oriented to capacity-oriented:
• 1x 400GB MLC SSD (~15% of usable capacity) + 5x 1.2TB 10K SAS: ~20-15K IOPS¹, 6TB raw capacity, $0.32/IOPS, $2.12/GB.
• 1x 400GB MLC SSD (~10% of usable capacity) + 7x 2TB 7.2K NL-SAS: ~15-10K IOPS¹, 14TB raw capacity, $0.57/IOPS, $1.02/GB.
• 2x 400GB MLC SSD (~4% of usable capacity) + 10x 4TB 7.2K NL-SAS: ~10-5K IOPS¹, 40TB raw capacity, $1.38/IOPS, $0.52/GB.
1. Mixed workload, 70% read, 80% random. Estimated based on 2013 street pricing; CAPEX includes storage hardware + software license costs.
Not for public disclosure: NDA material only; do not share with public until GA.
  16. Granular Scaling Eliminates Overprovisioning: delivers predictable scaling and the ability to control costs. VSAN enables predictable linear scaling; spikes in the curve correspond to scaling out due to IOPS requirements.
[Chart: storage cost per desktop ($40-$240) vs. number of desktops (500-3,000), Virtual SAN vs. midrange hybrid array.]
Notes: • Compared to external storage at scale • Estimated based on 2013 street pricing; CAPEX includes storage hardware + software license costs • Additional savings come from reduced OPEX through automation • Virtual SAN configuration: 9 VMs per core, 40GB per VM, 2 copies for availability, and 10% SSD for performance.
Not for public disclosure: NDA material only; do not share with public until GA.
  17. Running a Google-like Datacenter: modular infrastructure, break-replace operations. "From a break-fix perspective, I think there's a huge difference in what needs to be done when a piece of hardware fails. I can have anyone on my team go back and replace a 1U or 2U server. … essentially modularizing my datacenter and delivering a true Software-Defined Storage architecture." — Ryan Hoenle, Director of IT, DOE Fund
  18. Hardware Requirements: any server on the VMware Compatibility Guide.
• SSD, HDD, and storage controllers must be listed on the VMware Compatibility Guide for VSAN: http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan
• Minimum 3 ESXi 5.5 hosts; maximum hosts: "I'll tell you later…"
• 1Gb/10Gb NIC
• SAS/SATA controllers (RAID controllers must work in "pass-through" or "RAID0" mode)
• SAS/SATA/PCIe SSD and SAS/NL-SAS/SATA HDD, at least 1 of each
• 4GB to 8GB USB or SD cards.
  19. Flash-Based Devices. VMware SSD performance classes:
– Class A: 2,500-5,000 writes per second
– Class B: 5,000-10,000 writes per second
– Class C: 10,000-20,000 writes per second
– Class D: 20,000-30,000 writes per second
– Class E: 30,000+ writes per second.
Examples: Intel DC S3700 SSD, ~36,000 writes per second -> Class E; Toshiba SAS SSD MK2001GRZB, ~16,000 writes per second -> Class C.
Workload definition: queue depth 16 or less; transfer length 4KB; operations: write; pattern: 100% random; latency: less than 5 ms.
Endurance: 10 Drive Writes per Day (DWPD), and random write endurance up to 3.5 PB on 8KB transfer size per NAND module, or 2.5 PB on 4KB transfer size per NAND module.
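The class boundaries reduce to a simple lookup. A hypothetical helper (illustration only, not VMware tooling), assuming the drive's rate was measured under the workload definition above (4KB writes, queue depth 16 or less, 100% random):

```python
# Map a drive's sustained random-write rate to the VMware SSD performance
# class listed on this slide. Thresholds follow the slide's class table.
def ssd_performance_class(writes_per_second: int) -> str:
    if writes_per_second >= 30_000:
        return "E"
    if writes_per_second >= 20_000:
        return "D"
    if writes_per_second >= 10_000:
        return "C"
    if writes_per_second >= 5_000:
        return "B"
    if writes_per_second >= 2_500:
        return "A"
    return "below Class A"

print(ssd_performance_class(36_000))  # Intel DC S3700 -> E
print(ssd_performance_class(16_000))  # Toshiba MK2001GRZB -> C
```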
  20. Flash Capacity Sizing.
 The general recommendation for sizing Virtual SAN's flash capacity is 10% of the anticipated consumed storage capacity, before the Number of Failures To Tolerate is considered.
 The total flash capacity percentage should be based on use case, capacity, and performance requirements: 10% is a general recommendation; it could be too much, or it may not be enough.
Worked example: projected VM space usage 20GB; projected number of VMs 1,000; total projected space consumption 20GB x 1,000 = 20,000GB = 20TB; target flash capacity percentage 10%; total flash capacity required 20TB x 0.10 = 2TB.
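The worked example is a single multiplication. A minimal sketch of the rule of thumb, assuming the 10% target is applied to consumed capacity before the Failures To Tolerate multiplier:

```python
# Flash-tier sizing per the slide's 10% rule of thumb: flash is sized against
# anticipated *consumed* capacity, before FTT replication is considered.
def required_flash_gb(vm_count: int, gb_per_vm: float, flash_pct: float = 0.10) -> float:
    consumed_gb = vm_count * gb_per_vm  # projected consumed capacity
    return consumed_gb * flash_pct      # flash capacity to provision

# The slide's worked example: 1,000 VMs x 20GB at 10% -> 2,000GB (2TB).
print(required_flash_gb(1000, 20))  # 2000.0
```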
  21. Two Ways to Build a Virtual SAN Node: radically simple hypervisor-converged storage. 1. VSAN Ready Node: a preconfigured server ready to use VSAN, with 10 different options across multiple 3rd-party vendors available at GA. 2. Build your own, using the VSAN Compatibility Guide:* choose individual components: multi-level cell SSD (or better) or PCIe SSD; SAS/NL-SAS HDDs or select SATA HDDs; a 6Gb enterprise-grade HBA/RAID controller; any server on the vSphere Hardware Compatibility List. *Note: For additional details, please refer to the Virtual SAN VMware Compatibility Guide.
  22. Virtual SAN Implementation Requirements.
• Virtual SAN requires a minimum of 3 hosts in a cluster configuration, and all 3 hosts MUST contribute storage.
• vSphere 5.5 U1 or later.
• Locally attached disks: magnetic disks (HDD) and flash-based devices (SSD).
• Network connectivity: 1Gb Ethernet, or 10Gb Ethernet (preferred).
[Diagram: a vSphere 5.5 U1 cluster of esxi-01, esxi-02, and esxi-03, each contributing local storage.]
  23. Virtual SAN Scalable Architecture.
• Scale-up and scale-out architecture, with granular and linear scaling of storage, performance, and compute:
– per magnetic disk, for capacity
– per flash-based device, for performance
– per disk group, for performance and capacity
– per node, for compute capacity.
[Diagram: disk groups added within a host scale up; hosts added on the VSAN network scale out the vsanDatastore.]
  24. Oh yeah! Scalability… vsanDatastore: 4.4 petabytes, 2 million IOPS, 32 hosts.
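As a sanity check (my arithmetic, not the deck's), the headline capacity follows from the configuration maximums quoted elsewhere in the deck, assuming 4TB drives throughout:

```python
# Raw capacity at configuration maximums: 5 disk groups per host and
# 7 HDDs per group (per the disk-group slides), 32 hosts, assumed 4TB drives.
hosts, groups_per_host, hdds_per_group, tb_per_hdd = 32, 5, 7, 4
raw_tb = hosts * groups_per_host * hdds_per_group * tb_per_hdd
print(f"{raw_tb} TB raw, i.e. ~{raw_tb / 1024:.1f} PB")  # 4480 TB -> ~4.4 PB
```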
  25. Storage Policy-Based Management.
• SPBM is a storage policy framework built into vSphere that enables policy-driven virtual machine provisioning.
• Virtual SAN leverages this framework, in conjunction with the VASA APIs, to expose storage characteristics to vCenter:
– Storage capabilities: the underlying storage surfaces to vCenter what it is capable of offering.
– Virtual machine storage requirements: requirements can only be expressed against available capabilities.
– VM storage policies: the construct that stores a virtual machine's storage provisioning requirements, based on storage capabilities.
  26. Virtual SAN SPBM Object Provisioning Mechanism. [Diagram: a datastore profile from the Storage Policy Wizard flows through SPBM to the VSAN object manager, which lays out each virtual disk as VSAN objects.] VSAN objects may be (1) mirrored across hosts and (2) striped across disks/hosts to meet VM storage profile policies.
  27. Virtual SAN Disk Groups.
• Virtual SAN uses the concept of disk groups to pool flash devices and magnetic disks together as single management constructs.
• Disk groups are composed of at least 1 flash device and 1 magnetic disk:
– flash devices are used for performance (read cache + write buffer)
– magnetic disks are used for storage capacity
– disk groups cannot be created without a flash device.
Each host: 5 disk groups max. Each disk group: 1 SSD + 1 to 7 HDDs.
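Those constraints are straightforward to encode. A hypothetical validator (illustration only, not VMware code):

```python
# Encode the disk-group limits from this slide: exactly 1 flash device plus
# 1-7 magnetic disks per group, and at most 5 disk groups per host.
MAX_GROUPS_PER_HOST = 5

def validate_disk_group(ssd_count: int, hdd_count: int) -> None:
    if ssd_count != 1:
        raise ValueError("each disk group needs exactly one flash device")
    if not 1 <= hdd_count <= 7:
        raise ValueError("each disk group takes 1 to 7 magnetic disks")

def validate_host(disk_groups: list) -> None:
    if len(disk_groups) > MAX_GROUPS_PER_HOST:
        raise ValueError("a host supports at most 5 disk groups")
    for ssd, hdd in disk_groups:
        validate_disk_group(ssd, hdd)

validate_host([(1, 7), (1, 3)])  # OK: two valid (SSD count, HDD count) groups
```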
  28. Virtual SAN Datastore.
• Virtual SAN is an object store solution that is presented to vSphere as a file system.
• The object store mounts the VMFS volumes from all hosts in a cluster and presents them as a single shared datastore:
– only members of the cluster can access the Virtual SAN datastore
– not all hosts need to contribute storage, but it is recommended.
Each host: 5 disk groups max. Each disk group: 1 SSD + 1 to 7 HDDs.
  29. Virtual SAN Network.
• New Virtual SAN traffic VMkernel interface, dedicated to Virtual SAN intra-cluster communication and data replication.
• Supports both Standard and Distributed vSwitches; leverage NIOC for QoS in shared scenarios.
• NIC teaming is used for availability, not for bandwidth aggregation.
• Layer 2 multicast must be enabled on the physical switches; it is much easier to manage and implement than Layer 3 multicast.
[Diagram: a Distributed Switch with uplink1/uplink2 carrying Management (20 shares), Virtual Machines (30 shares), vMotion (50 shares), and Virtual SAN (100 shares) across vmk0, vmk1, vmk2.]
  30. Virtual SAN Network: NIC teaming and load-balancing algorithms.
– Route based on Port ID: active/passive with explicit failover.
– Route based on IP Hash: active/active with an LACP port channel.
– Route based on Physical NIC Load: active/active with an LACP port channel.
Requires multi-chassis link aggregation capable switches.
[Diagram: a Distributed Switch with uplink1/uplink2 carrying Management (100 shares), Virtual Machines (150 shares), vMotion (250 shares), and Virtual SAN (500 shares) across vmk0, vmk1, vmk2.]
  31. VMware Virtual SAN Interoperability: Technologies and Products.
  32. VMware Virtual SAN Configuration Walkthrough.
  33. Configuring VMware Virtual SAN: a radically simple configuration procedure. 1. Set up the Virtual SAN network. 2. Enable Virtual SAN on the cluster. 3. Select Manual or Automatic mode. 4. If Manual, create disk groups.
  34. Configure Network: configure the new dedicated Virtual SAN network, using the vSphere Web Client network template configuration feature.
  35. Enable Virtual SAN: one click away!
– With Virtual SAN configured in Automatic mode, all empty local disks are claimed by Virtual SAN for the creation of the distributed vsanDatastore.
– With Virtual SAN configured in Manual mode, the administrator must manually select disks to add to the distributed vsanDatastore by creating disk groups.
  36. Virtual SAN Datastore.
• A single Virtual SAN datastore is created and mounted, using storage from all hosts and disk groups in the cluster.
• The Virtual SAN datastore is automatically presented to all hosts in the cluster.
• The Virtual SAN datastore enforces thin-provisioned storage allocation by default.
  37. Virtual SAN Capabilities: Virtual SAN currently surfaces five unique storage capabilities to vCenter.
  38. Number of Failures to Tolerate.
• Defines the number of host, disk, or network failures a storage object can tolerate. For "n" failures tolerated, "n+1" copies of the object are created, and "2n+1" hosts contributing storage are required.
[Diagram: with the policy "Number of failures to tolerate = 1", a RAID-1 object places two vmdk replicas (each serving ~50% of I/O) plus a witness across esxi-01 to esxi-04 over the vsan network.]
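The replica and host math on this slide is mechanical. A small illustrative sketch (mine, not VMware code):

```python
# For "Number of failures to tolerate" = n, VSAN creates n+1 replicas of an
# object and needs 2n+1 hosts contributing storage; the odd extra components
# are witnesses used for quorum, as in the slide's FTT=1 diagram.
def ftt_requirements(failures_to_tolerate: int, vmdk_gb: float) -> dict:
    n = failures_to_tolerate
    return {
        "replicas": n + 1,                     # full copies of the object
        "min_hosts": 2 * n + 1,                # hosts contributing storage
        "raw_capacity_gb": (n + 1) * vmdk_gb,  # cluster space consumed
    }

# FTT=1: two mirrored copies plus a witness across at least 3 hosts.
print(ftt_requirements(1, 40))
# {'replicas': 2, 'min_hosts': 3, 'raw_capacity_gb': 80}
```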
  39. Number of Disk Stripes per Object.
• The number of HDDs across which each replica of a storage object is distributed. Higher values may result in better performance.
[Diagram: with the policy "Number of failures to tolerate = 1" + "Stripe width = 2", a RAID-1 object holds two RAID-0 replicas (stripe-1a/1b and stripe-2a/2b) plus a witness across esxi-01 to esxi-04 over the vsan network.]
  40. Managing Failure Scenarios.
 Through policies, VMs on Virtual SAN can tolerate multiple failures: disk failure (degraded event), SSD failure (degraded event), controller failure (degraded event), network failure (absent event), server failure (absent event).
 VMs continue to run.
 Parallel rebuilds minimize performance pain. Rebuild starts: SSD failure, immediately; HDD failure, immediately; controller failure, immediately; network failure, after 60 minutes; host failure, after 60 minutes.
  41. Virtual SAN Storage Capabilities.
• Force provisioning: if yes, the object will be provisioned even if the policy specified in the storage policy is not satisfiable with the resources currently available.
• Flash read cache reservation (%): flash capacity reserved as read cache for the storage object, specified as a percentage of the logical size of the object.
• Object space reservation (%): percentage of the logical size of the storage object that will be reserved (thick provisioned) upon VM provisioning; the rest of the storage object is thin provisioned.
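Both reservations are plain percentages of an object's logical size. Illustrative arithmetic (assumed, not VMware code):

```python
# Compute what the two reservation capabilities above set aside for an object.
def reservations(logical_gb: float, read_cache_pct: float, space_res_pct: float):
    flash_read_cache_gb = logical_gb * read_cache_pct / 100  # reserved on SSD
    thick_gb = logical_gb * space_res_pct / 100              # reserved up front
    thin_gb = logical_gb - thick_gb                          # allocated on demand
    return flash_read_cache_gb, thick_gb, thin_gb

# e.g. a 100GB virtual disk with a 1% read-cache and 20% space reservation:
print(reservations(100, 1, 20))  # (1.0, 20.0, 80.0)
```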
  42. VM Storage Policies Recommendations.
• Number of disk stripes per object: should be left at 1, unless the IOPS requirements of the VM are not being met by the flash layer.
• Flash read cache reservation: should be left at 0, unless there is a specific performance requirement to be met by a VM.
• Proportional capacity (object space reservation): should be left at 0, unless thick provisioning of virtual machines is required.
• Force provisioning: should be left disabled, unless the VM needs to be provisioned even if not in compliance.
  43. Failure Handling Philosophy.
 Traditional SANs: the physical drive needs to be replaced to get back to full redundancy; hot-spare disks are set aside to take the role of failed disks immediately. In both cases: 1:1 replacement of the disk.
 Virtual SAN: the entire cluster is a "hot spare"; we always want to get back to full redundancy. When a disk fails, many small components (stripes or mirrors of objects) fail, and new copies of these components can be spread around the cluster for balancing. Replacement of the physical disk just adds back resources.
  44. Understanding Failure Events.
 Degraded events trigger the immediate recovery of objects and components; this behavior is not configurable.
 Detected I/O errors that are always deemed degraded: magnetic disk failures, flash-based device failures, storage controller failures.
 Detected I/O errors that are always deemed absent: network failures, network interface card (NIC) failures, host failures.
  45. Maintenance Mode (planned downtime): 3 maintenance mode options: ensure accessibility, full data migration, no data migration.
  46. For more information, visit: http://www.vmware.com/products/virtual-san

Editor's Notes

  • With Software-Defined Storage, we're taking the operational model we pioneered in compute and extending it to storage. Software-Defined Storage allows businesses to more efficiently manage their storage infrastructure with software. How? [CLICK] First, by abstracting and pooling physical storage resources to create flexible logical pools of storage in the virtual data plane. We see three main pools going forward: the SAN/NAS pool (enabled by VVOL), hypervisor-converged (enabled by Virtual SAN), and cloud. [CLICK] Second, by providing VM-level data services like replication, snapshots, caching, etc. from a broad partner ecosystem. [CLICK] Lastly, by enabling an application-centric approach based on a common policy-based control plane. Storage requirements are captured for each individual VM in simple, intuitive policies that follow the VM through its life cycle on any infrastructure. This policy-based management framework allows for seamless automation and orchestration, with the Virtual SAN software dynamically making adjustments to underlying storage pools to ensure application-driven policies are compliant and SLAs are met. [CLICK] Integration and interoperability with our storage ecosystem is a key element of our strategy. Across all elements of SDS, we plan to enable integration points through APIs that will allow our partners to enable value-added capabilities on top of our platform. Above is a list of partners that we have been working with to make the Software-Defined Storage solution a reality for our customers. For example, EMC's ViPR technology abstracts and pools third-party external storage to create a virtual control plane for heterogeneous external storage. This is a great example of how Software-Defined Storage ecosystem vendors leverage the VMware platform to give customers more choice and the ability to transform their storage model. Software-Defined Storage uses virtualization software to create a fundamentally new approach to storage that removes unnecessary complexity, puts the application in charge, and delivers many of the same benefits we see from SDDC, including simplicity, high performance, and increased efficiency. T: Today, we're excited to announce Virtual SAN…
  • BEN TALKING: Abstracts and pools server-side disks and flash => shared datastore. [CLICK] Decouples software from hardware // converts physical to virtual. Embedded in the ESXi kernel to create a high-performance storage tier running on x86 servers. Policy-based management framework automates routine tasks. Creates a resilient, scalable storage tier that is easy to use. Gives users the flexibility to configure the storage they need. T: Virtual SAN is a true Software-Defined Storage product that runs on standard x86 servers, giving users deployment flexibility…
  • We announced the public beta of Virtual SAN at VMworld last year, and it's been a great success story. We had over 10,000 registered participants, and we've seen a lot of excitement and response from customers. The team has over-achieved. We promised we'd deliver VSAN in the first half of 2014; as you know, that usually means June 32nd. But I'm glad to announce that we're almost ready and will be releasing VSAN ahead of schedule in Q1. We also promised an 8-node solution for the first release, but I'm proud to announce that we're going to support 16 nodes at GA. Finally, to thank our beta customers, we're offering a 20% discount on their first purchase.
  • BEN TALKING: 2 ways to deploy => ready node or component based. VSAN is completely HW independent, with flexibility of configuration to optimize for performance or capacity. Ready Node: VMW working with OEM server vendors => "Virtual SAN Ready Nodes", servers designed to make it easy to run Virtual SAN. Build Your Own: VMW certifying VSAN to run on many different types of hardware: servers, magnetic disks, solid state drives, and controllers. Gives you the flexibility to choose and build a storage system based on your needs. VMware believes that a true Software-Defined Storage product gives users flexibility when constructing storage architectures. T: VMware has been working with a broad array of ecosystem vendors to make this a reality…
  • BEN TALKING: We have built a robust, global ecosystem around Virtual SAN. It includes all major server manufacturers and systems solutions, a broad range of hardware components such as controllers and disks, and a variety of data protection solutions. As part of the SDDC approach Pat laid out, VMware offers customers great flexibility of choice. T: In addition to being hardware independent, VSAN has a policy-based management framework built in to simplify storage
  • BEN TALKING: The SPBM framework allows you to define storage requirements based on application needs. [CLICK] It is simple => capacity, performance, and availability. [CLICK] VSAN matches requirements to underlying capabilities, unlike traditional external storage, where provisioning is done at the array layer. Automation: policies governed by SLAs. [CLICK] Orchestration: software abstracts the underlying hardware. End result => no more LUNs or volumes… T: To give you a better idea, let me show you how all of this works together (DEMO). John: You mentioned a policy-based framework. Help me understand how that works, as I believe that is a fairly new concept when it comes to storage.
  • BEN TALKING: Beyond the big numbers on this page, Virtual SAN scales to the needs of your environment: a powerful storage tier running on heterogeneous server hardware. Most importantly, it scales to the needs of customers: a 32-node VSAN cluster, 4.4 PB of capacity, 2M IOPS, 3,200 VMs. Not a toy: an ideal and viable storage tier for vSphere environments. VSAN is high performance, scalable, and resilient, and runs on heterogeneous hardware. JOHN TALKING: That's great, Ben. Couldn't you just add more hardware to any other storage technology in the market today to increase capacity? T: What is impressive about Virtual SAN is not just its maximum capacity or IOPS; it is its efficiency and how it gets to these numbers…
  • BEN TALKING: Yes, Virtual SAN scales to 32 nodes and 2M IOPS, but it does so in a predictable and linear fashion. This is particularly helpful if you are trying to forecast storage capacity, or have a latent application in need of more performance. Virtual SAN gives you the ability to granularly scale up or scale out your cluster: add more resources to achieve an intended outcome. One customer quote I liked from the beta was "We can customize IO and capacity on demand." Eliminates costly overprovisioning. (Pause…) As customers look for every edge possible in efficiency, Virtual SAN delivers on this. It gives you the control to have Google-like and Amazon-like efficiency within your private cloud. On the left: linear and predictable performance that scales with your environment, with the same functionality across different types of workloads. On the right: high VM density in VDI environments; performance isn't a constraint. VSAN has VM densities comparable to an all-flash array.
  • (SLIDE AUTOMATICALLY BUILDS) BEN TALKING: Interoperability is a key differentiator for Virtual SAN and makes the product easy to use for our customers. [GO AROUND TO TALK THROUGH PRODUCTS] A high degree of convenience makes storage simple for customers. John: It is great to hear that Virtual SAN is resilient and interoperates with other VMware products. Could you show me how this works? BEN: Sure. T: Let me show you how this works in the product
  • Drivers on the right – arrow – bubbles (with range): ~$2.5/GB; 50% TCO reduction; 5-10x OPEX; align costs with revenue; take advantage of decreasing HW prices
  • Increase the performance. Get better economics. Save on CPU resources. -- So the cost of an I/O, in CPU cycles and overhead, is important. Gray and Shenoy derive some rules of thumb for I/O costs: a disk I/O costs 5,000 instructions and 0.1 instructions per byte; the CPU cost of a Systems Area Network (SAN) message is 3,000 clocks and 1 clock per byte; a network message costs 10,000 instructions and 10 instructions per byte. For an 8KB I/O, which is a standard I/O size for Unix systems, the costs are: Disk: 5,000 + 800 = 5,800 instructions; SAN: 3,000 + 8,000 = 11,000 clocks; Network: 10,000 + 80,000 = 90,000 instructions. Thus it is obvious why IDCs implement local disks in general preference to SANs or networks: not only is it cheaper economically, it is much cheaper in CPU resources. Looked at another way, this simply confirms what many practitioners already have ample experience with: the EDC architecture doesn't scale easily or well. ------------------ Two I/O-intensive techniques are RAID 5 and RAID 6. In RAID 5, writing a block typically requires four disk accesses: two to read the existing data and parity, and two more to write the new data and parity (RAID 6 requires even more). Not surprisingly, Google avoids RAID 5 or RAID 6 and favors mirroring, typically mirroring each chunk of data at least three times, and many more times if it is hot. This effectively increases the IOPS per chunk of data at the expense of capacity, which is much cheaper than additional bandwidth or cache.
  • SSD interface: PCIe vs SAS vs SATA is not really a decision point for performance, as the corresponding IOPS performance will dictate the interface selection.
  • Speaker notes: vCenter is a requirement for management, since VSAN is fully integrated into vSphere. A minimum of 3 nodes and a maximum of 8 nodes (though there is some discussion around a higher node count in later versions). SSD must make up 10% of all storage, but it could be larger than that. We are also recommending a dedicated 10Gb network for VSAN, in fact a NIC team of 2x 10Gb NICs for availability purposes. vCenter Server version 5.5: central point of management. 3 vSphere hosts minimum, running ESXi version 5.5 or later; not all hosts need to have local storage, some can be just compute nodes. Maximum of 8 nodes in a cluster in version 1.0; greater than 8 nodes planned for future releases. Local storage: a combination of HDD & SSD; SSDs used as a read cache and write buffer, HDDs used as the persistent store. SAS/SATA controller: the RAID controller must work in "pass-thru" or "HBA" mode (no RAID). 1Gb or 10Gb network (preferred) for cluster communication/replication. We have not completed any real characterization yet, but it is expected that the CPU/memory overhead of VSAN is in the region of 10%. VSAN supports the concept of compute nodes: ESXi hosts which do not contribute any storage but still have access to, and can run VMs on, the distributed datastore. Best practices: minimum 3 nodes with storage; have a balanced cluster using identical host configurations; regarding the boot image: no stateless, preferred is to use SD card/USB/SATADOM.
  • Largest storage capacities: 5 disk groups x 7 HDDs x 4TB x 8 hosts = 1.1 PB; 5 disk groups x 7 HDDs x 4TB x 16 hosts = 2.2 PB
  • Enable multicast, either by disabling IGMP snooping or by configuring IGMP snooping for selective traffic. VSAN VMkernel multicast traffic should be isolated to a layer 2 non-routable VLAN. Layer 2 multicast traffic can be limited to a specific port group using IGMP snooping. We do not recommend implementing multicast flooding across all ports as a best practice. We do not require layer 3 multicast.
