1. THE BEST-KEPT INSIDER
SECRET: VMWARE VSPHERE 5
CLOUD DEPLOYMENT
MICHAEL HEFFERNAN, SOLUTIONS
PRODUCT MANAGER, VMWARE
PATRICK ALLAIRE, SENIOR PRODUCT
MARKETING MANAGER
2. WEBTECH EDUCATIONAL SERIES
STORAGE IN THE CLOUD SERIES
The Best-kept Insider Secret: VMware vSphere 5 Cloud Deployment
September 21, 9am PT, 12pm ET
‒ Learn why the industry’s most demanding customers are deploying clouds with the
storage virtualization leader. Hear Michael Heffernan, Hitachi Solutions Product
Manager, VMware, and Patrick Allaire, Senior Product Marketing Manager, give
you the inside information you need to understand why VMware vSphere 5 cloud
deployment on Hitachi infrastructure is the way to go.
Storage Virtualization: Delivering Storage as a Utility for the Cloud
September 28, 9am PT, 12pm ET
‒ Attend this informative session to learn how the Hitachi Command Suite can help
you meet the demanding storage requirements of private cloud computing.
MAINFRAME SERIES
Advances in Mainframe Storage, October 19, 9am PT, 12pm ET
Replication in a Mainframe Storage Environment, October 26, 9am PT, 12pm ET
Hitachi VSP Performance in a Mainframe Environment, November 2, 9am PT, 12pm
ET
4. AGENDA
Top VMworld 2011 myths
Storage Design and Architecture for vSphere
‒ VMware and Hitachi integration
‒ Hitachi AMS2000 and Virtual Storage Platform formula
‒ VMware storage APIs - VASA and VAAI
6. WHY NAS? (ADVANTAGES)
Flexibility and Cost Savings
‒ vSphere supports iSCSI, FC, FCoE and NFS (IP)
‒ All functionality of vSphere can be exploited over NFS, with a few exceptions:
‒ Cannot cluster VMs using Microsoft Cluster Server
‒ Cannot boot the physical host directly from NFS (requires some internal disk)
‒ No true multi-path I/O engine (although the network can provide fault tolerance)
‒ NFS/IP Ethernet is perceived as less costly, less complex and more flexible in deployment
‒ NFS provides a level of virtualization that abstracts some physical-level constraints – simple provisioning, LUN queue management, VMFS SCSI reserves
‒ NFS provides the ability to dynamically re-size VM datastores
7. WHY NOT NAS? (DISADVANTAGES)
Reliability
‒ Failover is rapid and clean over Fibre Channel; NFS implementations have higher timeouts.
‒ IP/Ethernet networks, while redundant, are generally not as robust.
Performance
‒ While you can make NFS perform equal to FC for a given workload with the right resources, it will consume 15% or more additional host CPU (substantial in sequential I/O).
‒ No native multi-pathing and load balancing (NFS cannot load balance I/O within a datastore).
‒ VAAI-enabled subsystems have addressed SCSI reserve issues.
8. WHY HITACHI NAS FOR VSPHERE?
Hitachi NAS offers a highly scalable platform
‒ Up to 8 high-performance nodes in a single cluster (almost 4X more scalable than the leading high-end vendor)
‒ File systems can be extended to 256TB (up to 16X)
Hitachi NAS provides Tiered File System to maximize vSphere performance
‒ Accelerates the metadata lookups that occur when processing snapshots across many large VMDK files
Hitachi NAS provides JetClone for space-efficient copies of VMs
Hitachi NAS is VMware certified
JetMirror provides object-based replication over WAN
9. AMS AND VSP STRONGER THAN OTHER NAS VENDORS
PURE NAS PLATFORMS HAVE WEAK BLOCK IMPLEMENTATIONS
Despite NAS having many software features and dedupe, these platforms do not have robust Fibre Channel capabilities:
‒ Dual-node failover time (15-45 sec. outage) vs. the VSP’s fault-tolerant architecture with a 100% data availability warranty
‒ No active/active symmetric controller load balancing like the Hitachi AMS2000
‒ LUNs are basically files on a file system, rather than native block capability (liable to fragment over time)
‒ OS and parity checksum overheads result in lower usable capacity
‒ No integrated encryption offering
‒ Limited virtualization capabilities
‒ VMware View Composer 5 supports intrinsic inline dedupe, while current primary storage dedupe is post-process and its operation impacts host I/O response time
10. DEPLOYMENT RECOMMENDATION
Evaluate both options
It’s not about block vs. file
‒ Both are valid options that have strengths and weaknesses
Think of your storage as a service
‒ NFS is a valid option when layered on top of our enterprise-level block platform
‒ NAS scalability is a must in large or high-growth environments
‒ Storage is the bottleneck: use automated tiering to balance performance and cost
11. Myth #2
Storage DRS and Profile-Driven storage support
tier 1 application requirements
12. Storage DRS and Profile-Driven Storage
Overview
‒ Tier storage based on performance characteristics (i.e., datastore cluster)
‒ Simplify initial storage placement
‒ Load balance based on I/O
[Diagram: datastores grouped into Tier 1, Tier 2 and Tier 3 by high I/O and throughput]
Benefits
‒ Eliminate VM downtime for storage maintenance
‒ Reduce time for storage planning/configuration
‒ Reduce errors in the selection and management of VM storage
‒ Increase storage utilization by optimizing placement
13. WHAT DOES STORAGE DRS PROVIDE?
• Storage DRS provides the following:
1. Initial placement of VMs and VMDKs based on available space and I/O capacity.
2. Load balancing between datastores in a datastore cluster via Storage vMotion based on storage space utilization.
3. Load balancing via Storage vMotion based on latency.
• Storage DRS also includes affinity/anti-affinity rules for VMs and VMDKs:
‒ VMDK Affinity – keep a VM’s VMDKs together on the same datastore. This is the default affinity rule.
‒ VMDK Anti-Affinity – keep a VM’s VMDKs separate on different datastores.
‒ Virtual Machine Anti-Affinity – keep VMs separate on different datastores.
• Affinity rules cannot be violated during normal operations.
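The affinity rules above can be sketched as a simple filter over the candidate datastores. This is an illustrative model only, not VMware's implementation; the names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str

def candidates(placed, datastores, rule="vmdk_affinity"):
    """Datastores where the next VMDK of a VM may go.

    `placed` maps the VM's already-placed VMDK names to datastore names.
    """
    if rule == "vmdk_affinity":
        # Default rule: keep all of the VM's disks on the same datastore.
        used = {ds for ds in placed.values()}
        return [d for d in datastores if not used or d.name in used]
    if rule == "vmdk_anti_affinity":
        # Spread the VM's disks across different datastores.
        used = {ds for ds in placed.values()}
        return [d for d in datastores if d.name not in used]
    return list(datastores)
```

With `disk1` already on `ds1`, VMDK affinity confines the next disk to `ds1`, while anti-affinity excludes `ds1` from the candidates.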
14. STORAGE DRS OPERATIONS – INITIAL PLACEMENT
Initial placement – VM/VMDK create/clone/relocate
• When creating a VM you select a datastore cluster rather than an individual datastore and let Storage DRS choose the appropriate datastore.
• Storage DRS will select a datastore based on space utilization and I/O load trend.
• By default, all the VMDKs of a VM will be placed on the same datastore within a datastore cluster (VMDK Affinity Rule), but you can choose to have VMDKs assigned to different datastores.
[Diagram: a 2TB datastore cluster of four 500GB datastores, with 300GB, 260GB, 265GB and 275GB available respectively]
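Using the figures from the diagram, a naive greedy version of initial placement can be sketched as follows. This is an illustration only; the real algorithm also weighs I/O load trend, which is omitted here.

```python
# Illustrative sketch: pick the datastore with the most free space, after
# filtering out any that would exceed the space threshold post-placement.
def place(vmdk_gb, cluster, capacity_gb=500, space_threshold=0.80):
    """cluster maps datastore name -> free GB; returns chosen name or None."""
    ok = [
        (name, free) for name, free in cluster.items()
        if (capacity_gb - free + vmdk_gb) / capacity_gb <= space_threshold
    ]
    if not ok:
        return None
    # Greedy choice: the candidate with the most free space.
    return max(ok, key=lambda item: item[1])[0]

# The four datastores from the diagram (free space in GB).
cluster = {"ds1": 300, "ds2": 260, "ds3": 265, "ds4": 275}
```

A 50GB VMDK lands on the datastore with 300GB free; a 250GB VMDK is refused, because every datastore would cross the 80% space threshold.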
15. STORAGE DRS OPERATIONS – LOAD BALANCING
Load balancing – Storage DRS triggers on space usage and latency thresholds
• The algorithm makes migration recommendations when I/O response time and/or space utilization thresholds have been exceeded.
• Space utilization statistics are constantly gathered by vCenter; the default threshold is 80%.
• Load balancing is based on I/O workload and space, which ensures that no datastore exceeds the configured thresholds.
• Storage DRS will do a cost/benefit analysis!
• For I/O load balancing, Storage DRS leverages Storage I/O Control functionality.
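The trigger logic described above can be sketched as follows, using the defaults from the deck (80% space, 15ms latency) and the 24-hour benefit horizon mentioned in the speaker notes. The function and its parameters are illustrative, not a VMware API.

```python
# Assumed model of the Storage DRS trigger: recommend a migration only when a
# threshold is breached AND the cost/benefit analysis expects the benefit to
# persist for at least the minimum horizon (24 hours).
def should_recommend(space_used_pct, latency_ms,
                     space_threshold=80.0, latency_threshold=15.0,
                     benefit_hours=30, min_benefit_hours=24):
    breached = (space_used_pct > space_threshold
                or latency_ms > latency_threshold)
    worthwhile = benefit_hours >= min_benefit_hours  # cost/benefit check
    return breached and worthwhile
```

Either threshold alone can trigger a recommendation, but a short-lived benefit suppresses it even when a threshold is breached.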
16. STORAGE DRS WORKFLOW
• I/O load trend is evaluated every 8 hours, based on the past day’s history
• Default latency threshold: 15ms
[Diagram: a cyclical workflow with DRS trigger points]
17. DRS WITH A TIER 1 APPLICATION
Customer processing sample for SAP with an Oracle Database
• Average usage of 21%, peak size 3-5x average
[Chart: storage utilization (%) by time of day across seven days, with DRS sampling windows alternating on and off]
18. Myth #3
Storage feature like automated sub-LUN tiering no
longer makes sense with vSphere 5 Storage DRS
19. STORAGE DRS VS AUTOMATED SUB-LUN TIERING
1. STORAGE DRS
‒ Eliminate VM downtime for storage maintenance
‒ Reduce time for storage planning/configuration
‒ Reduce errors in the selection and management of VM storage
‒ Increase storage utilization by optimizing placement
2. SUB-LUN TIERING
‒ Virtualize devices into a pool of capacity and allocate by pages
‒ Eliminate allocated-but-unused waste by allocating only the pages that are used
‒ Optimize storage performance by spreading the I/O across more arms
‒ Simplify management tasks
‒ Further reduce OPEX
‒ Further improve return on assets
[Diagrams: DRS trigger points across a datastore cluster; page I/O weights and tier ranges driving a page-relocation cycle that monitors physical I/O to pages]
20. It’s Time to Rethink Storage Design &
Architecture for vSphere 5…
22. VMware Storage APIs
vStorage APIs for Array Integration – VMware ESXi 5.0
[Diagram: the APIs improve Storage vMotion, provisioning VMs from template, thin provisioning, disk performance, VMFS shared storage pool scalability and dead space reclamation]
It is all about the ecosystem
‒ Standardization and open for all vendors
‒ OS is API-driven, which eliminates custom plug-ins into the OS
‒ APIs leverage each other under the covers
23. vStorage API for Array Integration
Write Same, Zero (Block Zeroing)
‒ Eliminates redundant and repetitive write commands, which means less I/O for common tasks
‒ Benefit: speeds provisioning of new VMs; key to supporting large-scale VMware or VDI deployments
Full Copy (XCOPY)
‒ Leverages the storage array’s ability to mass copy, snapshot and move blocks via SCSI commands
‒ Benefit: speeds up cloning and Storage vMotion; allows for faster copies of VMs
Hardware-assisted Locking
‒ Stop locking LUNs; start locking blocks only. Offloads SCSI commands to the storage array.
‒ Benefit: removes SCSI reservation conflicts; enables faster locking; improves VM density performance
Thin Provisioning (vSphere 5.0)
‒ TP-STUN – error code to report “Out of Space” for a thin volume
‒ UNMAP – zero page reclaim for virtual disks, in conjunction with the “Write Same” command on a thin volume
*Note: VAAI is currently supported on the Hitachi Adaptable Modular Storage 2000 family, VSP and USP V/VM. The Thin Provisioning API will be supported with ESXi 5.0.
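The contrast between a whole-LUN SCSI reservation and hardware-assisted locking can be sketched conceptually. This is purely illustrative: the real primitive is the SCSI ATS (atomic test-and-set) command operating on individual lock blocks, and the class below is a toy model, not any real driver interface.

```python
# Toy model contrasting a whole-LUN SCSI reservation with VAAI
# hardware-assisted locking (ATS on a single on-disk lock block).
class Lun:
    def __init__(self, num_blocks):
        self.blocks = {b: None for b in range(num_blocks)}  # block -> owner
        self.reserved_by = None                             # whole-LUN lock

    def scsi_reserve(self, host):
        """Old style: one host reserves the entire LUN, stalling all others."""
        if self.reserved_by is None:
            self.reserved_by = host
            return True
        return False

    def ats_lock(self, host, block):
        """VAAI style: atomically test and set a single lock block."""
        if self.blocks.get(block) is None:  # test ...
            self.blocks[block] = host       # ... and set
            return True
        return False
```

With ATS, two hosts can hold locks on different blocks of the same LUN concurrently, which is why reservation conflicts disappear and VM density per datastore can grow.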
25. Block Zeroing – VSP Test Result
Block Zeroing
‒ Write Same functionality – the storage array writes the content of a logical block to a range of logical blocks, including externally virtualized storage
Benefits
‒ Eliminates redundant and repetitive write commands
‒ 96 to 98% improvement provisioning a 160GB EagerZeroedThick VMDK in HDP volumes

VSP Storage           VAAI Status   HDP Pool Usage   Time
Internal              OFF           ~160GB           00:06:05
Internal              ON            0.6GB            00:00:12
Virtualized Storage   OFF           ~160GB           00:15:15
Virtualized Storage   ON            0.6GB            00:00:23

(LUN – internal or virtualized storage)
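The claimed 96 to 98% improvement can be checked directly from the timings in the table above:

```python
# Reproduce the improvement percentages from the VSP test timings.
def improvement(before_s, after_s):
    """Percentage reduction in elapsed time, to one decimal place."""
    return round(100 * (before_s - after_s) / before_s, 1)

internal = improvement(6 * 60 + 5, 12)    # 00:06:05 -> 00:00:12
virtual = improvement(15 * 60 + 15, 23)   # 00:15:15 -> 00:00:23
```

Both results fall inside the 96-98% range quoted on the slide.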
26. vSphere 5 introduces VMFS-5 with massive improvements

Feature                                VMFS-3                VMFS-5
2TB+ VMFS volumes (up to 64TB)         Yes (using extents)   Yes
Support for 2TB+ single-extent VMFS    No                    Yes
Unified block size (1MB)               No                    Yes
Atomic Test & Set enhancements         No                    Yes
(part of VAAI, locking mechanism)
Sub-blocks for space efficiency        64KB (max ~3k)        8KB (max ~30k)
Small file support                     No                    1KB

VMFS-5 will further leverage Hitachi’s Thin Provisioning technology
27. REMOVE LAYERS OF COMPLEXITY
A single 1PB liquid pool of storage capacity for all your virtualized storage
[Diagram: up to 60TB in a single VMFS volume]
Let the storage hardware do all the work
28. Closer Integration of Applications and Storage Needed for Data Center Transformation
The need for integration
• Applications have a software view and no visibility into infrastructure
• Storage has an infrastructure view and no visibility into applications
[Diagram: software view – ESX hosts running VMs on 2TB VMFS volumes with vMotion between hosts; storage view – ESXi 5.0 with a single 64TB VMFS on an HDP volume (virtual LUN), backed by an HDP/HDT pool of LDEVs presented as one LU]
29. Hitachi Dynamic Provisioning (HDP) – Internal and Externally Virtualized Storage
Thin provisioning: a powerful form of storage virtualization
An example with thin provisioning + VAAI:
‒ A 60TB VMFS volume is created in a 1PB HDP pool
‒ A set of VMDKs is created, consuming only 5.3TB
‒ The other 54.7TB is available for other applications
Additionally, for space efficiency + performance:
‒ A single virtual disk of 31GB consumes only 1GB of capacity
‒ vSphere 5.0 reclaims dead space automatically when a virtual disk is deleted or Storage vMotion’ed
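The thin-provisioning arithmetic in the example above works out as follows (treating 1PB as 1024TB for the sketch):

```python
# Reproduce the thin-provisioning example: a 60TB VMFS volume in a 1PB HDP
# pool, with only 5.3TB of VMDK data actually written.
pool_tb = 1024        # 1PB HDP pool (1024TB)
volume_tb = 60        # VMFS volume size presented to vSphere
consumed_tb = 5.3     # capacity actually consumed by VMDKs

# Space presented inside the volume but never written.
free_in_volume = volume_tb - consumed_tb
# Only written pages draw from the pool; the rest remains available
# to other applications sharing the same pool.
pool_available = pool_tb - consumed_tb
```

The volume still shows 54.7TB free, and the pool has surrendered only the 5.3TB that was actually written.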
30. The Hitachi AMS 2000 Formula – vSphere 5.0
[Diagram: a VMware ESXi cluster managed by vCenter Server, using Native Multipathing (NMP) with Round Robin plus Profile-Driven Storage and Storage DRS, on a Hitachi AMS 2000 family array with active/active symmetric controllers; the vStorage API for Storage Awareness (VASA) and the vStorage API for Array Integration (T10, 5 primitives) connect the layers, and Hitachi Dynamic Provisioning backs VMFS-5 datastores of up to 60TB each]
31. HITACHI VIRTUAL STORAGE PLATFORM FORMULA – vSPHERE 5.0
[Diagram: a VMware ESXi cluster managed by vCenter Server, using Native Multipathing (NMP) with Round Robin plus Profile-Driven Storage and Storage DRS; 256 VMFS volumes per ESXi host at up to 60TB each (256 x 60TB = 15.36PB of VMFS datastores), driven by the vStorage API for Array Integration, the vStorage API for Storage Awareness (VASA) and Hitachi Dynamic Provisioning; the VSP can externalize up to 255PB of third-party storage such as EMC DMX, CLARiiON, IBM DS, Thunder 9585V™, Lightning 9980V™ and AMS 2000]
32. THE BOTTOM LINE
HITACHI DATA SYSTEMS AND VMWARE TOGETHER
Lower your costs
Accelerate your time to value
Transform your data center
34. UPCOMING WEBTECH SESSIONS
September Cloud Series
Storage Virtualization: Delivering Storage as a Utility for the
Cloud, September 28, 9am PT, 12pm ET
Mainframe Series
Advances in Mainframe Storage, October 19, 9am PT, 12pm ET
Replication in a Mainframe Storage Environment, October 26, 9am PT, 12pm
ET
Hitachi VSP Performance in a Mainframe Environment, November 2, 9am
PT, 12pm ET
Please check www.hds.com/webtech next week for more information and for:
Link to the recording, the presentation and Q&A (available next week)
Schedule and registration for upcoming WebTech sessions
So, is it something specific to NetApp, or can all NAS (regardless of vendor) enjoy these benefits? vSphere itself supports multiple protocols – iSCSI, FCP and NFS. Generally speaking VMware is agnostic, although they do recognize that FCP provides the most I/O throughput with the least CPU overhead (everything is offloaded to the HBA). But is it that significant? There are a few things NAS can’t do on vSphere – you can’t use MS Clustering (that requires an RDM, or raw device mapping, because Microsoft uses SCSI reserves as part of its clustering mechanism). To be honest, though, most people don’t cluster their VMs – they use the advanced capabilities of vSphere (HA/DRS/vMotion) instead. You obviously can’t boot from SAN if you’re not using one, and you can’t boot a vSphere host from an NFS mount. There’s no real multi-path I/O engine – while the network can provide fault tolerance and you can port-channel NICs, TCP sessions aren’t load balanced the way a pair of FCP adapters with a multi-path I/O driver are. Many feel, however, that this isn’t a significant issue, especially with 10GigE. What people like about NFS is its flexibility and ubiquity. The IP network is much more flexible than the SAN, the skill set is common, and on a per-port basis it’s less expensive. But there are other benefits: in the Fibre Channel world I have to think about and manage LUNs, decide how many VMs per LUN to manage queue depth, and until recently I had to worry about VMFS SCSI reserve locking (VAAI addressed that one). In the NFS world all I have to do is create a file system and put files on it. On NFS, VMs are files. If I need to expand or shrink the datastore it’s an easy thing to do; VMFS LUN re-sizing is not so simple.
The 3200 provides up to 8 high-performance nodes in a single cluster (almost 4X more scalable than the leading high-end vendor). Hitachi NAS is VMware certified, and a vSphere Site Recovery Manager (SRM) Storage Replication Adapter (SRA) is available for IDR/IBR to support automated failover. JetMirror delivers replication that is faster and more efficient than file- or block-based replication, enabling more data to be protected each day. Since a vSphere deployment turns into an IT consolidation project, which in turn drives the need for performance, scalability and reliability, Hitachi NAS is built on the industry’s most reliable block platforms.
LUNs are basically files on a file system, rather than native block capability (liable to fragment over time)
Goal of this slide: To introduce: Data drives our world – and information is the new currency. It is the guiding theme for this presentation. This statement embodies the Hitachi Data Systems philosophy and the overall vision and strategy for our products and solutions long-term. Ultimately, this theme of data and information at the center of the economy, of your business, informs how we think about developing our products, our roadmap, and our solutions and services.

Speaker’s notes: (Note to presenter: As the slide fades in, read the slide’s content at a quick pace or ad lib, but emphasize the key takeaways. The following slides are meant to be presented at a reasonable pace to capture the audience’s attention, and focus it on several key supporting themes which are introduced in the slides that follow.)

The central theme of what we do at Hitachi Data Systems is: Data drives our world – and information is the new currency. Our customers trust Hitachi Data Systems to protect their data, the raw resource that lies at the heart of your business, whether you’re a <Note to Presenter: Say the type of company your audience is part of>, or Financial Services, Healthcare, Hospital, Media, Entertainment, Retail – the list goes on. Every industry has been transformed or is in the midst of being transformed, and perhaps the most common thread running throughout is the value and importance of data and, by extension, information, which is data with meaning, context, value.

The following quote from Pete Gerr, Hitachi Data Systems Strategic Marketing Principal, from his blog expands on this theme: “Data, the raw resource of today’s economy, is like its analogs in other industries and eras of history: coal, iron, oil, water. However, data itself has little intrinsic value – though its potential value is enormous. Information is data imbued with value. Organized for purpose. Instilled with meaning and context. Ripe for monetization.” (NEXT SLIDE)
Accelerate the VM storage placement decision to a storage pod by:
‒ Capturing VM storage SLA requirements
‒ Mapping to the storage with the right characteristics and spare space
So how exactly does Storage DRS solve these problems? How will it make your life better? As mentioned, first and foremost there is initial placement of virtual machines and VMDKs. This placement is based on space and I/O capacity: Storage DRS will select the best datastore in the selected datastore cluster to place the virtual machine or virtual disk. When Storage DRS is set to fully automatic, it will perform automated load-balancing actions. Of course this can be configured as manual as well, and that is actually the default today. Load balancing again is based on space and I/O capacity: if and when required, Storage DRS will make recommendations, but it will only do so when a specific threshold is reached. So what is this datastore cluster?
Storage DRS provides initial placement recommendations to datastores in a Storage DRS-enabled datastore cluster based on I/O and space capacity. During the provisioning of a virtual machine, a datastore cluster can be selected as the target destination for the virtual machine or virtual machine disk, after which a recommendation for initial placement is made based on I/O and space capacity. As just mentioned, initial placement in a manual provisioning process has proven to be very complex in most environments, and as such important provisioning factors like current I/O load or space utilization are often ignored. Storage DRS ensures initial placement recommendations are made in accordance with space constraints and with respect to the goals of space and I/O load balancing. Although people are really excited about automated load balancing, it is initial placement that most people will start with and benefit from the most, as it reduces the operational overhead associated with provisioning virtual machines.
Ongoing balancing recommendations are made when one or more datastores in a datastore cluster exceed the user-configurable space utilization or I/O latency thresholds. These thresholds are typically defined during the configuration of the datastore cluster. Storage DRS utilizes vCenter Server’s datastore utilization reporting mechanism to make recommendations whenever the configured utilized space threshold is exceeded. I/O load is evaluated by default every 8 hours, currently with a default latency threshold of 15ms. Only when this I/O latency threshold is exceeded will Storage DRS calculate all possible moves to balance the load accordingly, while considering the cost and the benefit of the migration. If the benefit doesn’t last for at least 24 hours, Storage DRS will not make the recommendation.
DRS is great at tracking workload trends. The DRS vMotion recommendation default is manual: the vCenter administrator needs to assess whether a VM or VMDK can be moved without causing any business impact. Remote replication is a limiting factor to take into account.
While Storage DRS is a great first step and does assist in the initial placement of a VM or VMDK, it is limited to performance trending and cannot support a tier 1 application on its own. Storage DRS is limited to the datastore cluster, its sampling translates into one daily check, and Storage vMotion is enabled in manual mode due to RPO/RTO impact. With the assistance of Hitachi Dynamic Tiering, the implementation provides the much greater agility and resilience required by a tier 1 application.
Speaker’s notes: Of the three VAAI features, Hardware-Assisted Locking has perhaps the biggest impact on VMware environments, addressing one of the key challenges organizations face in managing and scaling their storage environments for VMware. While many customers think the management and scaling challenges they face with VMware are a storage issue, SCSI reservation is the real culprit, resulting in:
‒ Smaller LDEV sizes (smaller volumes, fewer VMs), creating a guessing game of how many VMs could be supported per volume
‒ Fewer vMotions, due to the performance impact on the storage (customers implemented change control for vSphere admins)
‒ Fewer servers accessing the shared LDEV, in an effort not to introduce SCSI reserves
SCSI reservation also affected how organizations power on servers (to reduce boot storms) and how backups are done (attempts to stagger them) to avoid performance problems. Now, with Hardware-Assisted Locking, the ESX hosts are freed up, allowing for much greater densities, improved I/O, etc. The days of spending weeks or months designing storage for VMware are gone: provision up to 2TB volumes and forget about it.
While scale-up storage is a basic requirement to support server virtualization, tighter integration of application and storage virtualization can help to further transform the data center. Why do we need this integration? Applications only have a software view of the infrastructure: they do not know about the infrastructure that lies behind the representation of a LUN, and they do not know how to make any changes to that infrastructure. The storage knows about the infrastructure, but not what lies beyond the LUN on the application side.
Speaker’s Notes: A highly functioning VMware environment necessitates a flexible, adaptive storage environment – one that integrates with the virtual server environment, is able to dynamically pool and tier resources (performance and capacity) to applications as needed and scales within the same footprint to meet current and future application demands.
‒ More VMDKs on fewer arrays
‒ Enable greater efficiency on existing arrays
‒ Highest utilization of assets
‒ HDT and HDP enable performance using lower-cost disk; users only consume what they need
‒ Reduce the number of management and replication tools – heterogeneous storage with homogeneous management and replication
‒ Faster and simplified provisioning
‒ Easiest to optimize
‒ Zero-downtime migrations
‒ Leverage existing assets – no need for a rip and replace
‒ Uses existing people, processes, and technology – this is not a paradigm shift
‒ Customers can roll out new technologies and upgrades as quickly or as conservatively as they want, in a phased approach
‒ Investment protection – storage virtualization enables ITaaS; moving to that model now prepares customers for the future
Hitachi and VMware together lower your costs and accelerate your time to value.