VMworld 2013
Sachin Manpathak, VMware
Mustafa Uysal, VMware
Sunil Muralidhar, VMware
VMworld 2013: Storage DRS: Deep Dive and Best Practices to Suit Your Storage Environments
1. Storage DRS: Deep Dive and Best Practices to Suit
Your Storage Environments
Sachin Manpathak, VMware
Mustafa Uysal, VMware
Sunil Muralidhar, VMware
STO5636
#STO5636
2. Disclaimer
This session may contain product features that are
currently under development.
This session/overview of the new technology represents
no commitment from VMware to deliver these features in
any generally available product.
Features are subject to change, and must not be included in
contracts, purchase orders, or sales agreements of any kind.
Technical feasibility and market demand will affect final delivery.
Pricing and packaging for any new technologies or features
discussed or presented have not been determined.
3. VMware Vision: Software-Defined Storage
Enable new storage tiers
• DAS & server flash for shared storage along with enterprise SAN/NAS
Enable tight integration with the storage ecosystem
• Tighter integration with the broad storage ecosystem through APIs
Deliver policy-based automated storage management
• Automatically enforce per-VM SLAs for all apps across different types of storage
[Diagram: VMs (web server, app server, database server) placed across “Gold” arrays, “Silver” arrays, and distributed storage (SSDs and hard disks) to reduce storage cost and complexity. Per-VM requirement callout: Availability = 99.99%, DR RTO = 1 hour.
“Gold” SLA: Availability = 99%; Throughput = 1000 R/s, 20 W/s; Latency = 95% under 5 ms; DR RPO = 1’, RTO = 10’; Backup = hourly; Capacity reservation = 100%.
“Silver” SLA: Availability = 99%; Throughput = 100 R/s, 10 W/s; Latency = 90% under 10 ms; DR RPO = 60’, RTO = 360’; Backup = weekly; Security = encryption.]
4. Software-Defined Storage: Summary Roadmap
Enable new storage tiers
• Today: vSphere Storage Appliance — low cost, simple shared storage for small deployments
• H2 2013 / H1 2014: Virtual SAN (policy-driven storage for cloud-scale deployments), Virtual Flash
• Roadmap: Virtual SAN data services, Virtual Flash write-back caching
Tight integration with storage systems
• Roadmap: Virtual Volumes — VM-aware data management with enterprise storage arrays
Policy-based storage management
• Today (external storage): Storage IO Control, Storage vMotion, Storage DRS, Profile Driven Storage
• H2 2013 / H1 2014 (local storage): Virtual SAN
5. Outline
Introduction
Anatomy of Storage DRS
Best Practices and Deployment Scenarios
Preview from Storage DRS Labs
Summary
Survey: http://bit.ly/siocsdrs
6. Brief Introduction to Storage DRS
Ease of storage management:
• Initial Placement
• Out of Space Avoidance
• IO Load Balancing
• Virtual Disk Affinity (Anti-Affinity)
• Datastore Maintenance Mode
• Add Datastore
[Diagram: a datastore cluster; VMs are rebalanced across its datastores via Storage vMotion.]
7. Storage DRS Details
• VMworld talks
• Storage DRS whitepapers
• VMware Technical Journal (2012): “Storage DRS: Automated Management of Storage Devices in a Virtualized Datacenter”
9. Storage DRS Recommendations
Recommendation: the best datastore for the virtual disks in a VM
VM requirements, virtual disk type, capacity, IO load, rules
Datastore capabilities, capacity, performance, connectivity
Predicted resource usage
10. What Really Happened?
Simulated placement of virtual disks to datastores
• Space utilization, IO latency, CPU and memory
Rank is based on cluster-wide metrics after placement
• All resources contribute to the balance metric
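The ranking step above can be sketched as follows: simulate placing the new disk on each candidate datastore and score the cluster-wide balance that would result. This is a hypothetical illustration only — the function names and the std-dev space-balance metric are assumptions, and the real algorithm also weighs IO latency, CPU, and memory:

```python
# Hypothetical sketch of rank-by-simulated-placement (illustrative only;
# the real algorithm also weighs IO latency, CPU, and memory).
from statistics import pstdev

def rank_placements(disk_size_gb, datastores):
    """Simulate placing a disk on each candidate and rank candidates by
    the cluster-wide space imbalance that would result.
    datastores: name -> (used_gb, capacity_gb)."""
    ranked = []
    for candidate in datastores:
        # Utilization of every datastore *after* the simulated placement.
        utils = [(used + (disk_size_gb if name == candidate else 0)) / cap
                 for name, (used, cap) in datastores.items()]
        # Lower imbalance (std-dev of utilizations) ranks higher.
        ranked.append((pstdev(utils), candidate))
    return [name for _, name in sorted(ranked)]

stores = {"ds1": (600, 1000), "ds2": (200, 1000), "ds3": (450, 1000)}
print(rank_placements(100, stores))  # -> ['ds2', 'ds3', 'ds1']
```

Here the least-loaded datastore ranks first because placing the disk there leaves utilizations closest to uniform.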
11. Thin Provisioned VMDKs
Space entitlement = Allocated + ƒ(Idle)
Explicit control for the degree of space over-commitment
• Initial placement also uses the same controls
Online model to predict space usage growth over time
[Diagram: thin-provisioned VMDKs on Datastore A and Datastore B; a Big VMDK shows allocated space (10), provisioned space (100), and headroom (30), with the provisioned-but-unallocated remainder labeled “idle” space.]
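The entitlement formula above can be illustrated with a toy function. The exact form of ƒ(Idle) and the semantics of the over-commitment control are assumptions for illustration, not VMware's implementation:

```python
# Illustrative only: the exact form of f(Idle) and the semantics of the
# over-commitment control are assumptions, not VMware's implementation.
def space_entitlement(allocated_gb, provisioned_gb, overcommit_pct):
    """Charge a thin disk its allocated space plus a fraction of its idle
    (provisioned-but-unused) space. overcommit_pct = 0 charges all idle
    space (no over-commitment); 100 ignores idle space entirely."""
    idle_gb = provisioned_gb - allocated_gb
    return allocated_gb + idle_gb * (1 - overcommit_pct / 100)

# A 100 GB thin disk with 10 GB written, allowing 75% over-commitment:
print(space_entitlement(10, 100, 75))  # -> 32.5
```

The same entitlement value would drive both initial placement and space load balancing, per the slide.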
12. Datastore Cluster Fragmentation
Enough room at the cluster level
Big VMDK does not fit on any single datastore
Prerequisite migrations make room for the Big VMDK
All dependent actions are executed before placement
[Diagram: Datastores A, B, and C each hold a VMDK; none alone can fit the Big VMDK.]
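A greedy toy version of these prerequisite migrations might look like the sketch below. All names and the eviction policy are hypothetical; Storage DRS's actual algorithm is more sophisticated:

```python
# Hypothetical greedy sketch of prerequisite migrations (not VMware's
# actual algorithm). free: datastore -> free GB;
# disks: datastore -> sizes (GB) of disks already resident there.
def make_room(big_gb, free, disks):
    # Fast path: the big VMDK already fits somewhere.
    for ds, f in free.items():
        if f >= big_gb:
            return ds, []
    # Otherwise, try evicting disks from one datastore to the others.
    for target in free:
        avail = dict(free)
        moves = []
        for size in sorted(disks.get(target, ()), reverse=True):
            if avail[target] >= big_gb:
                break
            # Send this disk to the other datastore with the most room.
            dest = max((d for d in avail if d != target),
                       key=avail.get, default=None)
            if dest is None or avail[dest] < size:
                continue
            avail[dest] -= size
            avail[target] += size
            moves.append((size, target, dest))
        if avail[target] >= big_gb:
            return target, moves  # moves execute before the placement
    return None

free = {"A": 40, "B": 50, "C": 60}
disks = {"A": [30, 20], "B": [25], "C": [10]}
print(make_room(80, free, disks))  # evicts A's disks, then places on A
```

The returned move list is the slide's "dependent actions": they are executed first, and only then is the Big VMDK placed.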
16. Why Is a Recommendation Generated?
Storage DRS runs periodically for resource management
Storage DRS threshold violation in a datastore
• Not enough free space
• I/O latency was high for an extended period of time
One of the affinity rules is broken
• A rule changed or a new rule was added
Storage DRS estimates the benefits exceed the costs
• Cluster resources are balanced across multiple metrics
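The triggers above can be condensed into a hypothetical predicate; the names and threshold semantics are illustrative only, not VMware's implementation:

```python
# Hypothetical condensation of the recommendation triggers (names and
# threshold semantics are illustrative, not VMware's implementation).
def should_recommend(space_util, space_thresh, latency_ms, latency_thresh,
                     rule_broken, benefit, cost):
    """A move is considered on a threshold violation or a broken affinity
    rule, but only recommended if estimated benefit exceeds cost."""
    violated = space_util > space_thresh or latency_ms > latency_thresh
    return (violated or rule_broken) and benefit > cost

# 90% space used against an 80% threshold, and the move is worth its cost:
print(should_recommend(0.9, 0.8, 5, 15, False, benefit=5, cost=2))  # -> True
```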
18. Datastore Cluster Best Practices
Identical storage profiles
• E.g., Cluster-A (Tier2 VMs) on the Silver disk pool, Cluster-B (Tier1 VMs) on the Gold disk pool
Similar datastore performance
• May not be identical
Similar capabilities
• Data management
• Backup
Stay tuned for the Labs section
[Diagram: Cluster1 with wide performance variation vs. Cluster2 with similar datastores (✔).]
19. Datastore and Host Connectivity
Maximize host and datastore connectivity
• A fully connected datastore cluster improves DRS and Storage DRS performance over a partially connected one
More datastores in a cluster yield better space and I/O balance
Larger datastores yield better space balance
20. Deployment with Shared Disk Pools
Common scenario
• Recommended by vendors
• Improves IO performance
• Logical LUNs share disks from a common disk pool
Storage DRS discovers correlations
• Via VASA or automatic detection
Storage DRS respects correlations
• IO load balancing
• Rule enforcement
⤬ Problem: VM IO performance is correlated even when VMs reside on different LUNs — high I/O on one LUN causes high latency on another
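Automatic detection of correlated datastores can be approximated by checking whether their observed latencies move together. This Pearson-correlation sketch is illustrative only; the statistic and the 0.8 threshold are my assumptions, not the documented detection method:

```python
# Illustrative sketch of automatic correlation detection: two datastores
# whose latencies rise and fall together likely share backing disks.
# The Pearson test and the 0.8 threshold are assumptions for illustration.
from statistics import mean

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def correlated(lat_a, lat_b, threshold=0.8):
    """If two datastores are performance-correlated, moving a VM between
    them cannot fix an IO overload, so the balancer should skip the move."""
    return pearson(lat_a, lat_b) >= threshold

a = [5, 9, 14, 8, 20, 6]    # ms latency samples at the same instants
b = [6, 10, 15, 9, 22, 7]   # tracks a closely: likely shared spindles
c = [12, 4, 5, 11, 3, 13]   # moves independently of a
print(correlated(a, b), correlated(a, c))  # -> True False
```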
21. Deployment with Thin Provisioned LUNs
Storage array feature: add backing capacity on demand
• Example: Lun-1 is configured at 9TB; its disk backing grows from 3TB (08/29/13) to 6TB (10/29/13) as data is written
⤬ Problem: backing space can run out while the LUN still reports spare capacity!
Storage array signals the condition using VASA
Storage DRS stops placing VMs on such a LUN
Stay tuned for the Labs section
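The resulting placement behavior can be sketched as a simple filter. The `thin_alarm` field below is a hypothetical stand-in for the out-of-space condition the array would report through VASA:

```python
# Illustrative sketch: drop thin-provisioned LUNs whose array reported a
# backing-space-exhausted condition (via VASA) from placement, even though
# the LUN itself still shows free capacity. Field names are hypothetical.
def placement_candidates(datastores):
    """datastores: name -> {'free_gb': int, 'thin_alarm': bool}."""
    return [name for name, info in datastores.items()
            if info["free_gb"] > 0 and not info["thin_alarm"]]

pool = {
    "lun1": {"free_gb": 3000, "thin_alarm": True},   # 'spare' capacity, but
                                                     # backing disks are full
    "lun2": {"free_gb": 500, "thin_alarm": False},
}
print(placement_candidates(pool))  # -> ['lun2']
```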
22. Deployment with Auto-Tiered Arrays
Multiple storage tiers behind one logical LUN
• VM data is spread across a performance tier and a capacity tier
• Tier use changes with the workload
Storage DRS IOPS prediction
• May be inaccurate on auto-tiered LUNs
Storage DRS is still valuable in auto-tier array deployments!
• Automatic initial placement
• Space load balancing
• Rule enforcement
• Maintenance mode
Storage IO Control
• IO priority
23. Deployment with Deduplication
Provides space efficiency
• A dedupe pool can span multiple LUNs
Storage DRS uses the free space reported by the LUN
⤬ Problem: the LUN appears to store more data than its capacity (e.g., 4TB of virtual disks on a 1TB LUN)!
Stay tuned for the Labs section
26. Preview from the Storage DRS Labs
Evolve Storage DRS with vSphere storage solutions
Evolve Storage DRS with storage innovations
I/O reservation support
Fine-grained controls
27. vSphere SRM: Array-Based Replication
Storage DRS identifies replicated datastores
All recommendations are in sync with replication policies:
• Automated moves within the same consistency group
• Manual moves for all VMs residing on replicated datastores
Accounting of replication overhead due to Storage vMotion
28. vSphere Replication (VR)
Storage DRS discovers VR replicas in datastores
Storage DRS understands the space usage of replica disks
Storage DRS coordinates moves with VR
• Space balancing
• Maintenance mode
29. vSphere Storage Policy Based Management
Current: a datastore cluster contains datastores with the same storage profile
• E.g., Cluster-1 (Tier2 VMs) on the Silver disk pool, Cluster-2 (Tier1 VMs) on the Gold disk pool
Future: a datastore cluster may contain datastores with any storage profile
• E.g., a single Cluster-1 hosts Tier1 + Tier2 VMs across the Gold and Silver disk pools
30. Support for IO Reservations
Per-VM resource controls
• Reservation, Limit, Shares
Enforced at datastores and at datastore clusters
Used by Storage DRS initial placement and load balancing
IO capacity estimation
• Based on a reference workload
[Diagram: two datastores with SIOC (capacities C = 400 IOPS and C = 1500 IOPS) under Storage DRS; VM reservations of R = 100, 150, and 300 IOPS.]
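Reservation-aware placement amounts to an admission check against each datastore's estimated IOPS capacity. A minimal sketch, with hypothetical names and a simple most-headroom tiebreak:

```python
# Hypothetical sketch of reservation-aware placement (not VMware's code):
# admit a VM on a datastore only if total IOPS reservations stay within
# the datastore's estimated IOPS capacity (from a reference workload).
def can_admit(new_r, reserved, capacity):
    return reserved + new_r <= capacity

def place_with_reservation(new_r, datastores):
    """datastores: name -> (reserved_iops, capacity_iops).
    Pick the admissible datastore with the most reservation headroom."""
    fits = [(cap - res, name) for name, (res, cap) in datastores.items()
            if can_admit(new_r, res, cap)]
    return max(fits)[1] if fits else None

stores = {"ds1": (100, 400), "ds2": (150, 1500)}
print(place_with_reservation(350, stores))  # -> ds2 (ds1 would exceed 400)
```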
31. Tighter Integration with Storage Arrays
1. Discover storage capabilities using VASA
• E.g., LUNs with auto-tiering/dedupe/thin provisioning
• Indicate LUNs with a common disk pool
2. Intelligent decisions in Storage DRS
• Proactively manage backing capacity for thin provisioning
• Keep deduplicated VMs together
• Don’t interfere with auto-tier I/O optimizations
• Storage DRS fixes I/O overload conditions