The document summarizes changes in datacenter storage technologies. It discusses typical storage types used today like DAS, SAN, and NAS and how new technologies are changing them. Technologies discussed include PCIe flash, all-flash arrays, denser drives, InfiniBand, and cloud storage. It suggests storage architectures may move away from RAID with new flash-based solutions and caching algorithms optimized for flash performance rather than spinning disks.
5. Types & Use Cases
DAS – Direct Attached Storage
– Boot / OS volumes
– Non-critical, low-performance data
SAN – Storage Area Networks
– Critical and/or high-performance data
– Shared storage for clusters (RAC, MS Failover Clustering, VMware)
– Boot from SAN – enables replicated OS volumes and statelessness
– Array-based replication
NAS – Network Attached Storage
– Unstructured data (files and folders)
– VMware and Hyper-V 2012 datastores can use NAS
– Database backup destination
– Array-based replication
6. DAS - Direct Attached Storage
Simplest type of datacenter storage
Includes spinning hard drives and flash
Connected by SAS, SATA, USB, PCIe (also IDE, SCSI)
Limited in number of devices, performance, and availability
7. SAN – Storage Area Network
Composed of:
– Storage arrays
– Host bus adapters – I/O cards in hosts allowing SAN connectivity
– SAN switches – connect all the pieces together; purpose-built for storage connectivity
8. SAN – Storage Area Network
Storage Arrays:
– Purpose-built
– Manage large amounts of storage
– Presented to multiple hosts
– Performance improvements built in
• Tiering across multiple drive types to maximize performance and capacity for a given budget
• Read/write DRAM cache and caching algorithms
– Full redundancy – data, connectivity, management
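The read-cache idea above can be sketched as a simple LRU structure. This is a toy illustration only (the class and names are made up; real array caches add write-back buffering, prefetch, and tiering):

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache, standing in for an array's DRAM cache.

    Only shows the basic hit / miss / evict cycle of a caching
    algorithm sitting in front of slower spinning drives.
    """
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()   # LBA -> cached data
        self.hits = self.misses = 0

    def read(self, lba, backend):
        if lba in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(lba)        # mark most recently used
            return self.blocks[lba]
        self.misses += 1
        data = backend(lba)                     # slow path: go to disk
        self.blocks[lba] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used
        return data

# Usage with a fake "disk" backend
cache = ReadCache(capacity_blocks=2)
disk = lambda lba: f"block-{lba}"
cache.read(0, disk); cache.read(1, disk)
cache.read(0, disk)              # hit: 0 is still cached
cache.read(2, disk)              # miss: evicts LBA 1
print(cache.hits, cache.misses)  # 1 3
```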
9. SAN – Storage Area Network
SAN Switches
– FC, iSCSI, or FCoE – The Great Debate
– Must be compatible with the storage array, i.e. some arrays won't do some protocols
– FC (Fibre Channel) – purpose-built for storage, mostly implementing 8 Gb/s, but some 16 Gb/s models are available
– iSCSI – rides on TCP/IP, *not lossless*; depends on retransmits for packets dropped during heavy load periods. Network design is crucial; recommend isolating it from other network traffic. 10 Gb Ethernet is getting pretty common. (Is it the future?)
– FCoE – rides directly on Ethernet, not TCP/IP. Lossless; uses Data Center Bridging
10. SAN – Storage Area Network
SAN Switches – Analogy
FC – similar to railways: purpose-built, connected to predetermined, specific endpoints
iSCSI – similar to highways: can be more flexible, but traffic can be a problem
11. SAN – Storage Area Network
Array-based replication:
– Moves replication CPU overhead off of the host
– Can improve RPO by maintaining a journal of writes, allowing rollback to a specific point in time
– Simplifies management vs. separate replication for each database, filesystem, or drive
– Can be used to populate a test environment or backup server, duplicating the real production environment
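The write journal behind point-in-time rollback can be sketched in a few lines. All names here are illustrative (not any vendor's API): each write gets a sequence number, and any earlier image can be rebuilt by replaying the journal up to a bookmark:

```python
class JournaledVolume:
    """Toy write journal: every write is logged with a sequence number,
    so the volume image can be reconstructed as of any point in time."""
    def __init__(self):
        self.journal = []      # list of (seq, lba, data)
        self.seq = 0

    def write(self, lba, data):
        self.seq += 1
        self.journal.append((self.seq, lba, data))
        return self.seq        # caller can bookmark this point

    def image_at(self, seq):
        """Replay the journal up to `seq` for a point-in-time image."""
        image = {}
        for s, lba, data in self.journal:
            if s > seq:
                break
            image[lba] = data
        return image

vol = JournaledVolume()
vol.write(0, "A")
mark = vol.write(1, "B")     # bookmark taken before the bad write
vol.write(0, "corrupted!")
print(vol.image_at(mark))    # {0: 'A', 1: 'B'} – rolled back
```

Real arrays journal at block granularity and bound the journal's size, which is what limits how far back the rollback window reaches.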
12. SAN – Storage Area Network
Array-based replication:
– Application integration
– Usually required for geographically dispersed clustering
13. NAS – Network Attached Storage
NAS appliance – usually a purpose-built device running a flavor of Linux and serving up file shares and NFS exports from internal drives
Usually connects to the existing server LAN
Operates via CIFS (SMB v2 and v3) and NFS
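From a client's point of view, NAS storage is just a network mount. Illustrative /etc/fstab entries for the two protocols (the hostname "filer01" and all paths are made up):

```
# /etc/fstab – hypothetical NAS mounts
# NFS export from the appliance
filer01:/vol/projects  /mnt/projects  nfs   rw,hard,vers=3                        0 0
# CIFS/SMB share from the same appliance
//filer01/share        /mnt/share     cifs  credentials=/etc/cifs-cred,vers=3.0   0 0
```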
14. NAS – Network Attached Storage
Backups via NDMP, potentially reducing backup times for filesystems with a large number of files
Read/writeable checkpoints
Application integration
16. Changes to DAS
PCIe Flash – Fusion-io, VFCache, etc.
– Local storage, integrated with SAN
– Very low response time
VMware Distributed Storage
– Aggregates local storage from vSphere hosts in a cluster and presents that storage to all hosts in the cluster as a datastore
– Quality of local storage could become more important in the overall design
17. NAS
Hypervisor running on the NAS appliance
– VMware vSphere running on Isilon
– Very high-bandwidth access to storage
SMB v3
– Not supported on every NAS appliance yet
– Usable by Hyper-V 2012 to store VMs
– Usable by MSSQL to store database files
Windows VM as NAS? VMware VADP Changed Block Tracking (CBT) = fast backups
18. SAN
InfiniBand becoming more common
– New (and existing) array technologies using InfiniBand for internal communication: XtremIO, XIV, etc.
– New array technologies using InfiniBand for a "Cache Area Network" – read/write cache shared between clustered hosts (Oracle RAC and SAP use cases)
19. SAN
16 or 32 Gb FC and 40 Gb or 100 Gb Ethernet (iSCSI)
– FC and Ethernet will continue to leapfrog. Emulex already has an FCoE card that will do 40 GbE + 16 Gb FC
Multi-hop FCoE
New startup companies shaking things up
– All-flash arrays and hybrid arrays
– Next year should see acquisitions
20. Drive Architecture Changes
Enterprise-grade MLC flash
– Less expensive per GB
– SLC will probably stick around for write performance
Smaller drives going away
– Like the 72 GB drives of yesteryear, today's 300 GB and 1 TB drives will be phased out; 600 GB+ and 2 TB+ will become the standard for spinning drives
21. Drive Architecture Changes
RAID may no longer be the standard
– RAID was designed for spinning drives. Workloads that specify a RAID type are usually considering head location and locality of reference. RAID is still needed for spinning drives.
– Flash-based arrays are doing inline dedupe, pointer-based blockmaps, and redirect-on-first-access instead of copy-on-write.
– Caching algorithms traditionally sequentialize incoming I/O requests to work better with spinning drives. That is no longer necessary on flash.
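The inline dedupe and pointer-based blockmap ideas above can be sketched with a tiny content-addressed store: incoming blocks are hashed, only one physical copy of each distinct block is kept, and the per-volume blockmap is just pointers to that copy. A toy illustration, not any array's actual design:

```python
import hashlib

class DedupeStore:
    """Toy inline dedupe: blocks are addressed by content hash, and the
    per-volume blockmap holds pointers into the shared block store."""
    def __init__(self):
        self.blocks = {}     # content hash -> physical block data
        self.blockmap = {}   # (volume, lba) -> content hash

    def write(self, volume, lba, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)   # store only if content is new
        self.blockmap[(volume, lba)] = digest  # point this LBA at it

    def read(self, volume, lba):
        return self.blocks[self.blockmap[(volume, lba)]]

store = DedupeStore()
store.write("vol1", 0, b"same bytes")
store.write("vol1", 1, b"same bytes")   # duplicate: no new physical block
store.write("vol2", 0, b"other bytes")
print(len(store.blockmap), len(store.blocks))  # 3 logical blocks, 2 physical
```

Because writes only ever add new content and repoint the map, there is no read-modify-write of an existing block, which is the same reason these designs favor redirect-on-write over copy-on-write.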
22. Cloud-Based Storage
Lots of clouds: PaaS, IaaS, SaaS, DBaaS, BaaS, DRaaS
– Most solutions don't require you to know the nuts and bolts of the underlying storage…
…BUT, we could soon see solutions involving all-flash arrays on premises, connected to slower cloud-based storage.