1. © 2014 VMware Inc. All rights reserved.
VMware Virtual SAN 5.5
Technical Deep Dive – March 2014
Alberto Farronato, VMware
Wade Holmes, VMware
March, 2014
2. Software-Defined Storage
2
Bringing the efficient operational model of virtualization to storage
Virtual Data Services
Data Protection Mobility Performance
Policy-driven Control Plane
SAN / NAS
SAN/NAS Pool
Virtual Data Plane
x86 Servers
Hypervisor-converged
Storage pool
Object Storage Pool
Cloud Object
Storage
Virtual SAN
3. Virtual SAN: Radically Simple Hypervisor-Converged Storage
3
vSphere + VSAN
• Runs on any standard x86 server
• Policy-based management framework
• Embedded in vSphere kernel
• High performance flash architecture
• Built-in resiliency
• Deep integration with VMware stack
The Basics
[Diagram: three hosts, each with SSD + hard disks, pooled into the VSAN Shared Datastore]
4. 12,000+
Virtual SAN Beta
Participants
95%
Beta customers
Recommend
VSAN
90%
Believe VSAN will
Impact Storage like
vSphere did to
Compute
Unprecedented Customer Interest And Validation
4
5. Why Virtual SAN?
5
• Two-click install
• Single pane of glass
• Policy-driven
• Self-tuning
• Integrated with VMware stack
Radically Simple
• Embedded in vSphere kernel
• Flash-accelerated
• Up to 2M IOPS from a 32-node cluster
• Granular and linear scaling
High Performance
Lower TCO
• Server-side economics
• No large upfront investments
• Grow-as-you-go
• Easy to operate with powerful automation
• No specialized skillset
6. Two Ways to Build a Virtual SAN Node
6
Completely Hardware Independent
1. Virtual SAN Ready Node
Preconfigured server ready to use Virtual SAN…
…with multiple options available at GA + 30
2. Build Your Own
Choose individual components…
…using the Virtual SAN Compatibility Guide*
SSD or PCIe flash
SAS/NL-SAS/SATA HDDs
HBA/RAID Controller
Any Server on the vSphere Hardware Compatibility List
* Note: For additional details, please refer to the Virtual SAN VMware Compatibility Guide page
* Components for Virtual SAN must be chosen from the Virtual SAN HCL; using any other components is unsupported
7. Broad Partner Ecosystem Support for Virtual SAN
7
Storage
Server / Systems Solutions
Data Protection Solutions
8. Virtual SAN Simplifies And Automates Storage Management
8
Per VM Storage Service Levels From a Single Self-tuning Datastore
Storage Policy-Based Management
Virtual SAN
Shared Datastore
vSphere + Virtual SAN
SLAs
Software Automates
Control of Service Levels
No more LUNs/Volumes!
Policies Set Based
on Application Needs
Capacity
Performance
Availability
Per VM
Storage Policies
"Virtual SAN is easy to deploy, just a few check boxes. No need to configure RAID."
– Jim Streit
IT Architect, Thomson Reuters
9. Virtual SAN Delivers Enterprise-Grade Scale
9
2M
IOPS
3,200
VMs
4.4
Petabytes
Maximum Scalability per Virtual SAN Cluster
32
Hosts
"Virtual SAN allows us to build out scalable heterogeneous storage infrastructure like the Facebooks and Googles of the world. Virtual SAN allows us to add scale, add resources, while being able to service high-performance workloads."
– Dave Burns
VP of Tech Ops, Cincinnati Bell
10. High Performance with Elastic and Linear Scalability
10
[Chart: IOPS vs. number of hosts in Virtual SAN cluster]
Hosts: 4 / 8 / 16 / 24 / 32
Mixed IOPS: 80K / 160K / 320K / 480K / 640K
100% Read IOPS: 253K / 505K / 1M / 1.5M / 2M
Up to 2M IOPS in a 32-node cluster
Notes: Based on IOmeter benchmark; Mixed = 70% Read, 4K, 80% random

[Chart: Number of VDI VMs vs. number of hosts (3, 5, 7, 8) in Virtual SAN cluster; data points 286, 473, 677, 767, 805; VSAN vs. All-SSD Array]
Comparable VDI density to an All-Flash Array
Notes: Based on View Planner benchmark
11. Virtual SAN is Deeply Integrated with VMware Stack
11
Ideal for VMware Environments
CONFIDENTIAL – NDA ONLY
vMotion
vSphere HA
DRS
Storage vMotion
vSphere
Snapshots
Linked Clones
VDP Advanced
vSphere Replication
Data Protection
VMware View
Virtual Desktop
vCenter Operations Manager
vCloud Automation Center
IaaS
Cloud Ops and Automation
Site Recovery Manager
Disaster Recovery
Site A Site B
Storage Policy-Based Management
12. Virtual SAN 5.5 – Pricing And Packaging
12
VSAN Editions and Bundles
Editions: Virtual SAN | Virtual SAN with Data Protection | Virtual SAN for Desktop
Overview:
• Virtual SAN – standalone edition; no capacity, scale, or workload restriction
• Virtual SAN with Data Protection – bundle of Virtual SAN and vSphere Data Protection Advanced
• Virtual SAN for Desktop – standalone edition; VDI only (VMware or Citrix); concurrent or named users
Licensing: Per CPU | Per CPU | Per User
Price (USD): $2,495 | $2,875 (promo ends Sept 15th, 2014) | $50
Features (included in all three editions unless noted):
✓ Persistent data store
✓ Read / Write caching
✓ Policy-based Management
✓ Virtual Distributed Switch
✓ Replication (vSphere Replication)
✓ Snapshots and clones (vSphere Snapshots & Clones)
✓ Backup (vSphere Data Protection Advanced) – Virtual SAN with Data Protection only
Not for Public Disclosure
NDA Material only
Do not share with Public until GA
Note: Regional pricing in standard VMware currencies applies. Please check local pricelists for more detail.
13. Virtual SAN – Launch Promotions
13
Bundle Promos:
• Virtual SAN with Data Protection – Virtual SAN (1 CPU) + vSphere Data Protection Advanced (1 CPU)
– Promo discount: 20%; promo price: $2,875 / CPU; end date: 9/15/2014
• VSA to VSAN upgrade – Virtual SAN (6 CPUs per bundle)
– Promo discount: 20%; promo price: $9,180 / bundle; end date: 9/15/2014
Beta Promo:
• Register and download promo – Virtual SAN (1 CPU)
– Promo discount: 20%; promo price: $1,996 / CPU; end date: 6/15/2014
Terms: minimum purchase of 10 CPUs; first purchase only
Note: Regional pricing for promotions exists in standard VMware currencies. Please check local pricelists for more detail.
14. Virtual SAN Reduces CAPEX and OPEX for Better TCO
14
CAPEX
• Server-side economics
• No Fibre Channel network
• Pay-as-you-grow
OPEX
• Simplified storage configuration
• No LUNs
• Managed directly through the vSphere Web Client
• Automated VM provisioning
• Simplified capacity planning
As low as $0.50/GB² | As low as $0.25/IOPS | 5x lower OPEX⁴ | Up to 50% TCO reduction | As low as $50/Desktop¹
1. Full clones
2. Usable capacity
3. Estimated based on 2013 street pricing; CAPEX includes storage hardware + software license costs
4. Source: Taneja Group
15. Flexibly Configure For Performance And Capacity
15
Three example configurations along the performance–capacity spectrum:

Performance-optimized: 2x 8-core CPU, 128GB memory; 1x 400GB MLC SSD (~15% of usable capacity); 5x 1.2TB 10K SAS HDDs
– IOPS¹: ~20-15K; raw capacity: 6TB; $0.32/IOPS; $2.12/GB
Intermediate: 2x 8-core CPU, 128GB memory; 1x 400GB MLC SSD (~10% of usable capacity); 7x 2TB 7.2K NL-SAS HDDs
– IOPS¹: ~15-10K; raw capacity: 14TB; $0.57/IOPS; $1.02/GB
Capacity-optimized: 2x 8-core CPU, 128GB memory; 2x 400GB MLC SSD (~4% of usable capacity); 10x 4TB 7.2K NL-SAS HDDs
– IOPS¹: ~10-5K; raw capacity: 40TB; $1.38/IOPS; $0.52/GB

1. Mixed workload: 70% Read, 80% Random
Estimated based on 2013 street pricing; CAPEX includes storage hardware + software license costs
16. Granular Scaling Eliminates Overprovisioning
16
Delivers Predictable Scaling and the Ability to Control Costs
• Compared to external storage at scale
• Estimated based on 2013 street pricing; CAPEX includes storage hardware + software license costs
• Additional savings come from reduced OPEX through automation
• Virtual SAN configuration: 9 VMs per core, with 40GB per VM, 2 copies for availability, and 10% SSD for performance
VSAN enables predictable linear scaling; spikes correspond to scaling out due to IOPS requirements
[Chart: $/VDI storage cost per desktop vs. number of desktops (500-3,000); Virtual SAN vs. Midrange Hybrid Array]
17. Running a Google-like Datacenter
17
Modular infrastructure. Break-Replace Operations
"From a break-fix perspective, I think there's a huge difference in what needs to be done when a piece of hardware fails. I can have anyone on my team go back and replace a 1U or 2U server … essentially modularizing my datacenter and delivering a true Software-Defined Storage architecture."
– Ryan Hoenle
Director of IT, DOE Fund
18. Hardware Requirements
18
Any Server on the VMware
Compatibility Guide
• SSD, HDD, and storage controllers must be listed on the VMware Compatibility Guide for Virtual SAN
http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan
• Minimum 3 ESXi 5.5 hosts; maximum hosts: "I'll tell you later…"
1Gb/10Gb NIC
SAS/SATA controllers (RAID controllers must work in "pass-through" or RAID0 mode)
SAS/SATA/PCIe SSD
SAS/NL-SAS/SATA HDD
At least 1 of each
4GB to 8GB USB / SD cards
19. Flash Based Devices
VMware SSD Performance Classes
– Class A: 2,500-5,000 writes per second
– Class B: 5,000-10,000 writes per second
– Class C: 10,000-20,000 writes per second
– Class D: 20,000-30,000 writes per second
– Class E: 30,000+ writes per second
Examples
– Intel DC S3700 SSD: ~36,000 writes per second → Class E
– Toshiba SAS SSD MK2001GRZB: ~16,000 writes per second → Class C
Workload Definition
– Queue Depth: 16 or less
– Transfer Length: 4KB
– Operations: write
– Pattern: 100% random
– Latency: less than 5 ms
Endurance
– 10 Drive Writes per Day (DWPD), and
– Random write endurance up to 3.5 PB on 8KB transfer size per NAND module, or 2.5 PB on 4KB transfer size per NAND module
19
20. Flash Capacity Sizing
• The general recommendation for sizing Virtual SAN's flash capacity is to have 10% of the anticipated consumed storage capacity before the Number of Failures To Tolerate is considered.
• Total flash capacity percentage should be based on use case, capacity, and performance requirements.
– 10% is a general recommendation; it could be too much or not enough.
Projected VM space usage: 20GB
Projected number of VMs: 1,000
Total projected space consumption: 20GB x 1,000 = 20,000GB = 20TB
Target flash capacity percentage: 10%
Total flash capacity required: 20TB x 0.10 = 2TB
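The sizing rule in the table can be sketched in a few lines of Python. This is only an illustration of the arithmetic above; the function name and the 10% default are ours, not a VMware API:

```python
# Minimal sketch of the Virtual SAN flash-sizing rule of thumb from this slide:
# target flash capacity = 10% of the anticipated consumed capacity, measured
# before the "Number of Failures To Tolerate" multiplier is applied.

def flash_capacity_gb(vm_space_gb, vm_count, flash_pct=0.10):
    """Recommended flash (SSD) capacity in GB for the projected VM footprint."""
    projected_consumption_gb = vm_space_gb * vm_count  # before FTT copies
    return projected_consumption_gb * flash_pct

# Worked example from the table: 1,000 VMs at 20GB each -> 20TB consumed,
# 10% flash target -> 2TB of flash.
print(flash_capacity_gb(20, 1000))  # 2000.0 (GB) = 2TB
```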
21. Two Ways to Build a Virtual SAN Node
Radically Simple Hypervisor-Converged Storage
1. VSAN Ready Node
Preconfigured server ready to use VSAN…
…with 10 different options between multiple 3rd-party vendors available at GA
2. Build Your Own
Choose individual components…
…using the VSAN Compatibility Guide*
Multi-level cell SSD (or better) or PCIe SSD
SAS/NL-SAS HDD; select SATA HDDs
6Gb enterprise-grade HBA/RAID controller
Any server on the vSphere Hardware Compatibility List
* Note: For additional details, please refer to the Virtual SAN VMware Compatibility Guide
22. Virtual SAN Implementation Requirements
• Virtual SAN requires:
– Minimum of 3 hosts in a cluster configuration
– All 3 hosts MUST contribute storage
• vSphere 5.5 U1 or later
– Locally attached disks
• Magnetic disks (HDD)
• Flash-based devices (SSD)
– Network connectivity
• 1Gb Ethernet
• 10Gb Ethernet (preferred)
22
[Diagram: vSphere 5.5 U1 cluster of esxi-01, esxi-02, esxi-03, each contributing local storage (HDDs)]
23. Virtual SAN Scalable Architecture
23
• Scale-up and scale-out architecture – granular and linear scaling of storage, performance, and compute
– Per magnetic disk – for capacity
– Per flash-based device – for performance
– Per disk group – for performance and capacity
– Per node – for compute capacity
[Diagram: vsanDatastore spanning hosts over the VSAN network; adding disks or disk groups scales up, adding nodes scales out]
25. Storage Policy-based Management
• SPBM is a storage policy framework built into vSphere that enables virtual machine policy-driven provisioning.
• Virtual SAN leverages this new framework in conjunction with the VASA APIs to expose storage characteristics to vCenter:
– Storage capabilities
• The underlying storage surfaces up to vCenter what it is capable of offering.
– Virtual machine storage requirements
• Requirements can only be used against available capabilities.
– VM Storage Policies
• A construct that stores a virtual machine's storage provisioning requirements based on storage capabilities.
25
26. Storage Policy Wizard
[Diagram: Storage Policy Wizard → Datastore Profile → SPBM → VSAN object manager → virtual disk (VSAN object)]
VSAN objects may be (1) mirrored across hosts and (2) striped across disks/hosts to meet VM storage profile policies
Virtual SAN SPBM Object Provisioning Mechanism
27. Virtual SAN Disk Groups
• Virtual SAN uses the concept of disk groups to pool together flash devices and magnetic disks as single management constructs.
• Disk groups are composed of at least 1 flash device and 1 magnetic disk.
– Flash devices are used for performance (read cache + write buffer).
– Magnetic disks are used for storage capacity.
– Disk groups cannot be created without a flash device.
27
[Diagram: disk groups across hosts] Each host: 5 disk groups max. Each disk group: 1 SSD + 1 to 7 HDDs.
28. Virtual SAN Datastore
• Virtual SAN is an object store solution that is presented to vSphere as a file system.
• The object store mounts the VMFS volumes from all hosts in a cluster and presents them as a single shared datastore.
– Only members of the cluster can access the Virtual SAN datastore.
– Not all hosts need to contribute storage, but it's recommended.
28
[Diagram: vsanDatastore spanning all hosts over the VSAN network. Each host: 5 disk groups max; each disk group: 1 SSD + 1 to 7 HDDs]
29. Virtual SAN Network
• New Virtual SAN traffic VMkernel interface.
– Dedicated to Virtual SAN intra-cluster communication and data replication.
• Supports both Standard and Distributed vSwitches.
– Leverage NIOC for QoS in shared scenarios.
• NIC teaming is used for availability, not for bandwidth aggregation.
• Layer 2 multicast must be enabled on physical switches.
– Much easier to manage and implement than Layer 3 multicast.
29
[Diagram: Distributed Switch with port groups Management (20 shares), Virtual Machines (30 shares), vMotion (50 shares), Virtual SAN (100 shares); uplink1/uplink2; vmk0, vmk1, vmk2]
30. Virtual SAN Network
• NIC teaming and load-balancing algorithms:
– Route based on Port ID
• active/passive with explicit failover
– Route based on IP Hash
• active/active with LACP port channel
– Route based on Physical NIC Load
• active/active with LACP port channel
[Diagram: Distributed Switch with Management (100 shares), Virtual Machines (150 shares), vMotion (250 shares), Virtual SAN (500 shares); uplink1/uplink2; multi-chassis link aggregation capable switches]
33. Configuring VMware Virtual SAN
• Radically Simple configuration procedure
33
1. Set up the Virtual SAN network
2. Enable Virtual SAN on the cluster
3. Select Manual or Automatic mode
4. If Manual, create disk groups
35. Enable Virtual SAN
• One click away!
– In Automatic mode, all empty local disks are claimed by Virtual SAN for the creation of the distributed vsanDatastore.
– In Manual mode, the administrator must manually select disks to add to the distributed vsanDatastore by creating disk groups.
35
36. Virtual SAN Datastore
• A single Virtual SAN datastore is created and mounted, using storage from all hosts and disk groups in the cluster.
• The Virtual SAN datastore is automatically presented to all hosts in the cluster.
• The Virtual SAN datastore enforces thin-provisioned storage allocation by default.
36
38. Number of Failures to Tolerate
• Number of failures to tolerate
– Defines the number of host, disk, or network failures a storage object can tolerate. For "n" failures tolerated, "n+1" copies of the object are created, and "2n+1" hosts contributing storage are required.
38
[Diagram: Virtual SAN policy "Number of failures to tolerate = 1" – vmdk mirrored (RAID-1) across esxi-01 and esxi-02 (~50% of I/O each), witness component on esxi-03, over the vsan network]
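The FTT arithmetic above (n failures tolerated means n+1 replicas and 2n+1 hosts contributing storage) can be sketched as follows; the helper function is illustrative, not part of any VSAN API:

```python
# Sketch of the "Number of Failures To Tolerate" (FTT) arithmetic:
# tolerating n failures requires n+1 copies of the object and 2n+1 hosts
# contributing storage (the additional hosts hold witness components).

def ftt_requirements(n):
    return {"replicas": n + 1, "hosts_contributing_storage": 2 * n + 1}

print(ftt_requirements(1))  # {'replicas': 2, 'hosts_contributing_storage': 3}
print(ftt_requirements(2))  # {'replicas': 3, 'hosts_contributing_storage': 5}
```

This is why the minimum supported cluster is 3 hosts: the default policy of FTT = 1 already needs 2 replicas plus a witness on a third host.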
39. Number of Disk Stripes Per Object
• Number of disk stripes per object
– The number of HDDs across which each replica of a storage object is distributed. Higher values may result in better performance.
39
[Diagram: VSAN policy "Number of failures to tolerate = 1" + "Stripe Width = 2" – each RAID-1 replica is striped (RAID-0) across two disks (stripe-1a/1b and stripe-2a/2b) on esxi-01 through esxi-03, with a witness on esxi-04, over the vsan network]
41. Virtual SAN Storage Capabilities
• Force provisioning
– If yes, the object will be provisioned even if the requirements specified in the storage policy are not satisfiable with the resources currently available.
• Flash read cache reservation (%)
– Flash capacity reserved as read cache for the storage object, specified as a percentage of the logical size of the object.
• Object space reservation (%)
– Percentage of the logical size of the storage object that will be reserved (thick provisioned) upon VM provisioning. The rest of the storage object is thin provisioned.
41
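As a small illustration of how the two percentage-based capabilities translate into capacity, the sketch below applies them to one object. The object size and percentages are example values we chose, not defaults:

```python
# Example of how the percentage-based VSAN storage capabilities map to
# concrete capacity for one storage object. All values are illustrative.

def reservations_gb(object_size_gb, read_cache_pct, space_reservation_pct):
    return {
        # Flash read cache reserved for the object (% of its logical size)
        "flash_read_cache_gb": object_size_gb * read_cache_pct / 100,
        # Thick-provisioned portion; the remainder stays thin provisioned
        "thick_reserved_gb": object_size_gb * space_reservation_pct / 100,
    }

# A 100GB object with a 1% read cache reservation and 25% space reservation:
print(reservations_gb(100, 1, 25))
# {'flash_read_cache_gb': 1.0, 'thick_reserved_gb': 25.0}
```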
42. VM Storage Policies Recommendations
• Number of disk stripes per object
– Should be left at 1, unless the IOPS requirements of the VM are not being met by the flash layer.
• Flash read cache reservation
– Should be left at 0, unless there is a specific performance requirement to be met by a VM.
• Object space reservation (proportional capacity)
– Should be left at 0, unless thick provisioning of virtual machines is required.
• Force provisioning
– Should be left disabled, unless the VM needs to be provisioned even if not in compliance.
42
43. Failure Handling Philosophy
• Traditional SANs
– A physical drive needs to be replaced to get back to full redundancy
– Hot-spare disks are set aside to take the role of failed disks immediately
– In both cases: 1:1 replacement of the disk
• Virtual SAN
– The entire cluster is a "hot spare"; we always want to get back to full redundancy
– When a disk fails, many small components (stripes or mirrors of objects) fail
– New copies of these components can be spread around the cluster for balancing
– Replacement of the physical disk just adds back resources
44. Understanding Failure Events
• Degraded events trigger the immediate recovery operations.
– Triggers the immediate recovery of objects and components
– Not configurable
• Any of the following detected I/O errors are always deemed degraded:
– Magnetic disk failures
– Flash-based device failures
– Storage controller failures
• Any of the following detected failures are always deemed absent:
– Network failures
– Network Interface Card (NIC) failures
– Host failures
44
45. Maintenance Mode – planned downtime
• 3 maintenance mode options:
– Ensure accessibility
– Full data migration
– No data migration
With Software-Defined Storage, we're taking the operational model we pioneered in compute and extending that to storage. Software-Defined Storage allows businesses to more efficiently manage their storage infrastructure with software. How? [CLICK] First, by abstracting and pooling physical storage resources to create flexible logical pools of storage in the virtual data plane. We see three main pools going forward: the SAN/NAS pool (enabled by VVOL), hypervisor-converged (enabled by Virtual SAN), and cloud. [CLICK] Second, by providing VM-level data services like replication, snapshots, caching, etc. from a broad partner ecosystem. [CLICK] Lastly, by enabling an application-centric approach based on a common policy-based control plane. Storage requirements are captured for each individual VM in simple, intuitive policies that follow the VM through its life cycle on any infrastructure. This policy-based management framework allows for seamless automation and orchestration, with the Virtual SAN software dynamically making adjustments to underlying storage pools to ensure application-driven policies are compliant and SLAs are met. [CLICK] Integration and interoperability with our storage ecosystem is a key element of our strategy. Across all elements of SDS we plan to enable integration points through APIs that will allow our partners to enable value-added capabilities on top of our platform. Above is a list of partners that we have been working with to make the Software-Defined Storage solution a reality for our customers. For example, EMC's ViPR technology abstracts and pools third-party external storage to create a virtual control plane for heterogeneous external storage.
This is a great example of how Software-Defined Storage ecosystem vendors leverage the VMware platform to give customers more choice and the ability to transform their storage model. Software-Defined Storage uses virtualization software to create a fundamentally new approach to storage that removes unnecessary complexity, puts the application in charge, and delivers many of the same benefits we see from SDDC… including simplicity, high performance, and increased efficiency. T: Today, we're excited to announce Virtual SAN…
BEN TALKING:
Abstracts and pools server-side disks and flash => shared datastore. [CLICK] Decouples software from hardware // converts physical to virtual. Embedded in the ESXi kernel to create a high-performance storage tier running on x86 servers. A policy-based management framework automates routine tasks. Creates a resilient, scalable storage tier that is easy to use, and gives users the flexibility to configure the storage they need.
T: Virtual SAN is a true Software-Defined Storage product that runs on standard x86 servers, giving users deployment flexibility…
We announced the public beta of Virtual SAN at VMworld last year and it's been a great success story. We had over 10,000 registered participants, and we've seen a lot of excitement and response from customers. The team has over-achieved. We promised we'd deliver VSAN in the first half of 2014. As you know, that usually means June 32nd. But I'm glad to announce that we're almost ready and will be releasing VSAN ahead of schedule in Q1. We also promised an 8-node solution for the first release, but I'm proud to announce that we're going to support 16 nodes at GA. Finally, to thank our Beta Customers, we're offering a 20% discount on their first purchase.
BEN TALKING:
2 ways to deploy => ready node or component-based. VSAN is completely HW independent, with the flexibility of configuration to optimize for performance or capacity.
Ready Node: VMW is working with OEM server vendors on "Virtual SAN Ready Nodes" – servers designed to make it easy to run Virtual SAN.
Build Your Own: VMW is certifying VSAN to run on many different types of hardware: servers, magnetic disks, solid state drives, and controllers. This gives you the flexibility to choose… build a storage system based on your needs. VMware believes that a true Software-Defined Storage product gives users flexibility when constructing storage architectures.
T: VMware has been working with a broad array of ecosystem vendors to make this a reality…
BEN TALKING:
We have built a robust, global ecosystem around Virtual SAN. It includes all major server manufacturers and systems solutions, a broad range of hardware components such as controllers and disks, and a variety of data protection solutions. As part of the SDDC approach Pat laid out, VMware offers customers great flexibility of choice.
T: In addition to being hardware independent, VSAN has a policy-based management framework built in to simplify storage
BEN TALKING:
The SPBM framework allows you to define storage requirements based on application needs. [CLICK] It is simple => capacity, performance and availability. [CLICK] VSAN matches requirements to underlying capabilities, unlike traditional external storage, where provisioning is done at the array layer. Automation: policies governed by SLAs. [CLICK] Orchestration: software abstracts the underlying hardware. End result => no more LUNs or volumes…
T: To give you a better idea, let me show you how all of this works together (DEMO)
John: You mentioned a policy-based framework. Help me understand how that works, as I believe that is a fairly new concept when it comes to storage.
BEN TALKING:
Beyond the big numbers on this page… Virtual SAN scales to the needs of your environment: a powerful storage tier running on heterogeneous server hardware. Most importantly, it scales to the needs of customers: a 32-node VSAN cluster, 4.4 PB of capacity, 2M IOPS, 3,200 VMs. Not a toy – an ideal and viable storage tier for vSphere environments. VSAN is high performance, scalable and resilient… and runs on heterogeneous hardware.
JOHN TALKING: That's great, Ben. Couldn't you just add more hardware to any other storage technology in the market today to increase capacity?
T: What is impressive about Virtual SAN is not just its maximum capacity or IOPS… it is its efficiency and how it gets to these numbers…
BEN TALKING:
Yes… Virtual SAN scales to 32 nodes and 2M IOPS, but it does so in a predictable and linear fashion. This is particularly helpful if you are trying to forecast storage capacity… or have a latent application in need of more performance. Virtual SAN gives you the ability to granularly scale up or scale out your cluster: add more resources to achieve an intended outcome. One customer quote I liked from the beta was: "We can customize IO and capacity on demand." This eliminates costly overprovisioning. Pause… As customers look for every possible edge in efficiency, Virtual SAN delivers on this. It gives you the control to have Google-like and Amazon-like efficiency within your private cloud.
On the left: linear and predictable performance; scales with your environment; same functionality across different types of workloads.
On the right: high VM density in VDI environments; performance isn't a constraint; VSAN has VM densities comparable to an all-flash array.
(SLIDE AUTOMATICALLY BUILDS)
BEN TALKING:
Interoperability is a key differentiator for Virtual SAN and makes the product easy to use for our customers. [GO AROUND TO TALK THROUGH PRODUCTS] A high degree of convenience… makes storage simple for customers.
John: It's great to hear that Virtual SAN is resilient and interoperates with other VMware products. Could you show me how this works?
BEN: Sure.
T: Let me show you how this works in the product
Drivers on the right → arrow → bubbles (with range): $2.5/GB; 50% TCO reduction; 5-10x OPEX. Align costs with revenue. Take advantage of decreasing HW prices.
Increase the performance. Get better economics. Save on CPU resources.
--
So the cost of an I/O, in CPU cycles and overhead, is important. Gray and Shenoy derive some rules of thumb for I/O costs:
– A disk I/O costs 5,000 instructions and 0.1 instructions per byte
– The CPU cost of a Systems Area Network (SAN) network message is 3,000 clocks and 1 clock per byte
– A network message costs 10,000 instructions and 10 instructions per byte
For an 8KB I/O, which is a standard I/O size for Unix systems, the costs are:
– Disk: 5,000 + 800 = 5,800 instructions
– SAN: 3,000 + 8,000 = 11,000 clocks
– Network: 10,000 + 80,000 = 90,000 instructions
Thus it is obvious why IDCs implement local disks in general preference to SANs or networks. Not only is it cheaper economically, it is much cheaper in CPU resources. Looked at another way, this simply confirms what many practitioners already have ample experience with: the EDC architecture doesn't scale easily or well.
------------------
Two I/O-intensive techniques are RAID 5 and RAID 6. In RAID 5, writing a block typically requires four disk accesses: two to read the existing data and parity and two more to write the new data and parity (RAID 6 requires even more). Not surprisingly, Google avoids RAID 5 or RAID 6 and favors mirroring, typically mirroring each chunk of data at least three times and many more times if it is hot. This effectively increases the IOPS per chunk of data at the expense of capacity, which is much cheaper than additional bandwidth or cache.
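The Gray/Shenoy rule-of-thumb arithmetic quoted in this note is easy to recompute; the sketch below follows the note's convention of treating an "8KB" I/O as 8,000 bytes:

```python
# Recomputing the Gray/Shenoy I/O cost rules of thumb for an 8KB I/O.
# fixed = per-operation cost; per_byte = incremental cost per byte moved.
IO_BYTES = 8000  # the note's arithmetic treats 8KB as 8,000 bytes

def io_cost(fixed, per_byte, nbytes=IO_BYTES):
    return fixed + per_byte * nbytes

print(io_cost(5000, 0.1))   # disk: 5,800 instructions
print(io_cost(3000, 1))     # SAN: 11,000 clocks
print(io_cost(10000, 10))   # network: 90,000 instructions
```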
SSD Interface:
PCIe vs SAS vs SATA – not really a decision point for performance, as the corresponding IOPS performance will dictate the interface selection.
Speaker notes:
vCenter is a requirement for management since VSAN is fully integrated into vSphere. A minimum of 3 nodes and a maximum of 8 nodes (though there is some discussion around a higher node count in later versions). SSD must make up 10% of all storage, but it could be larger than that. We are also recommending a dedicated 10Gb network for VSAN – in fact, a NIC team of 2 x 10Gb NICs for availability purposes.
– vCenter Server version 5.5: central point of management
– 3 vSphere hosts minimum, running ESXi version 5.5 or later; not all hosts need to have local storage, some can be just compute nodes
– Maximum of 8 nodes in a cluster in version 1.0; greater than 8 nodes planned for future releases
– Local storage: combination of HDD & SSD; SSDs used as a read cache and write buffer; HDDs used as the persistent store
– SAS/SATA controller: RAID controller must work in "pass-thru" or "HBA" mode (no RAID)
– 1Gb or 10Gb (preferred) network for cluster communication/replication
We have not completed any real characterization yet, but it is expected that the CPU/memory overhead of VSAN is in the region of 10%. VSAN supports the concept of compute nodes: ESXi hosts which do not present any storage, but still have access to, and can run VMs on, the distributed datastore.
Best practices:
– Minimum 3 nodes with storage
– Have a balanced cluster using identical host configurations
– Regarding boot image: no stateless; preferred is to use SD card/USB/SATADOM
Largest storage capacities:
5 disk groups x 7 HDDs x 4TB x 8 hosts = 1.1 PB
5 disk groups x 7 HDDs x 4TB x 16 hosts = 2.2 PB
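The capacity arithmetic in this note can be double-checked with a one-liner; the limits (5 disk groups per host, up to 7 HDDs per group) come from the disk-group slide earlier in the deck:

```python
# Verifying the maximum raw capacity figures in the note above:
# disk_groups/host x HDDs/group x TB/HDD x hosts.

def raw_capacity_tb(hosts, disk_groups=5, hdds_per_group=7, hdd_tb=4):
    return hosts * disk_groups * hdds_per_group * hdd_tb

print(raw_capacity_tb(8))   # 1120 TB, i.e. ~1.1 PB
print(raw_capacity_tb(16))  # 2240 TB, i.e. ~2.2 PB
```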
Enable multicast:
– Disabling IGMP snooping, or configuring IGMP snooping for selective traffic
– VSAN VMkernel multicast traffic should be isolated to a Layer 2 non-routable VLAN
– Layer 2 multicast traffic can be limited to specific port groups using IGMP snooping
– We do not recommend implementing multicast flooding across all ports as a best practice
– We do not require Layer 3 multicast