Let’s take a look at another example. This time we are moving the data that belongs to an application – due to the lease expiration of a disk system for example. In a traditional SAN, you can see that we still have that static connection between the host system and the physical disk. In order to move the data that belongs to the application we have to…
(click)
Stop the application
(click)
Move the data
(click)
Reestablish a new set of static connections between the host and the new disk system
(click)
And restart the application.
Lease expiration meant you had to make a change in the physical infrastructure. The host system had to adapt – and as we discussed – that meant disruption to the business application.
(click)
With a virtualized environment, however, things are very different. The host system is dealing with a virtual disk – not the physical disk that went off lease. So, all the administrator has to do is to tell the SAN Volume Controller to remap the data on the virtual disk to another physical disk. Watch.
(click)
The host system and the business application running on it have no idea you made a change in the physical infrastructure. No adapting – no outages.
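To make the idea concrete, here is a minimal sketch in Python of the mapping layer that storage virtualization relies on. The class and method names are hypothetical, purely for illustration – not the SAN Volume Controller's actual interface – but they show why the host never notices the physical move.

    # Minimal sketch of the virtualization idea (hypothetical names, not the
    # actual SAN Volume Controller implementation): the host only ever sees a
    # virtual disk ID; the mapping to a physical disk lives in the
    # virtualization layer and can be changed without touching the host.
    class VirtualizationLayer:
        def __init__(self):
            self.mapping = {}  # virtual disk ID -> physical disk ID

        def provision(self, vdisk, pdisk):
            self.mapping[vdisk] = pdisk

        def read(self, vdisk, block):
            pdisk = self.mapping[vdisk]  # the host never sees this lookup
            return f"data for block {block} from {pdisk}"

        def migrate(self, vdisk, new_pdisk):
            # Copy the data in the background, then repoint the mapping.
            # The host keeps addressing the same virtual disk the whole time.
            self.mapping[vdisk] = new_pdisk

    san = VirtualizationLayer()
    san.provision("vdisk-1", "old-disk-system")
    print(san.read("vdisk-1", 0))            # application I/O before the move
    san.migrate("vdisk-1", "new-disk-system")
    print(san.read("vdisk-1", 0))            # same virtual disk, new physical disk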
(click)
Now, let's take a closer look at the EXP3512 and EXP3524 expansion units and how they are cabled behind a DS3500 storage system.
The EXP3512 and EXP3524 have a lot of the same characteristics. They are both 2U SBB-compliant enclosures that can house either two controllers or two ESMs. With the ESMs, they support up to three SAS drive connections, and they both support hot-swappable customer replaceable units, or CRUs. The only key difference is the drives each of these expansion units houses: the EXP3512 supports 12 3.5-inch drives and the EXP3524 supports up to 24 2.5-inch drives.
The DS3500 can support a mix of these enclosures, with the only limitation being a maximum of 96 drive slots. To cable these expansion units, it is important to use the top-down, bottom-up cabling method. This ensures that if any of the ESMs or a controller goes offline, access to all the enclosures and data continues without interruption. In this image, we are using the top-down, bottom-up cabling method with seven EXP3512 expansion units. If you were cabling all EXP3524 expansion units, you would need only three behind the DS3500 to reach 96 drives.
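As a quick back-of-the-envelope check of the 96-drive-slot limit, assuming the DS3500 controller enclosure itself contributes 12 slots as a DS3512 or 24 slots as a DS3524:

    # Sanity check of the 96-drive-slot maximum (assumes the controller
    # enclosure itself holds 12 drives as a DS3512 or 24 as a DS3524).
    MAX_SLOTS = 96

    ds3512_setup = 12 + 7 * 12   # DS3512 controller enclosure + seven EXP3512
    ds3524_setup = 24 + 3 * 24   # DS3524 controller enclosure + three EXP3524

    print(ds3512_setup, ds3524_setup)   # both come to 96, the supported maximum
    assert ds3512_setup <= MAX_SLOTS and ds3524_setup <= MAX_SLOTS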
Trial period expiration alerts
Warning – Starting at 30 days remaining, and once per day until trial expiration
Critical – 3 days remaining before trial expiration (only delivered if the feature is actually “in use”)
DS5K (DS3950, DS5020, DS5100, DS5300): No T&B options
- One time per feature per system
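A minimal sketch of the alert schedule described above, as a hypothetical helper function rather than the actual controller firmware logic:

    # Sketch of the trial-expiration alert schedule (hypothetical helper).
    def trial_alerts(days_remaining, feature_in_use):
        alerts = []
        if 0 < days_remaining <= 30:
            alerts.append(f"WARNING: trial expires in {days_remaining} days")  # once per day
        if days_remaining == 3 and feature_in_use:
            alerts.append("CRITICAL: trial expires in 3 days")  # only if the feature is in use
        return alerts

    print(trial_alerts(30, feature_in_use=False))   # first warning, 30 days out
    print(trial_alerts(3, feature_in_use=True))     # warning plus critical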
Here we look at “Redefining RAID”.
Dynamic Disk Pools provides a dramatic reduction in rebuild times. What once sometimes took days to rebuild now finishes in minutes.
Storage management is simplified with DDP. RAID, hot spares, parity and expansion are all handled automatically. The user simply needs to specify the size of the volume or volumes needed.
Performance is more consistent and stable across various failure scenarios. How data is handled during drive failures provides a more consistent level of performance.
At issue: as drive capacities get larger and larger, the time it takes traditional RAID systems to rebuild after a failure to an idle spare is getting longer and longer. This is because in traditional RAID the idle spare gets all the write traffic during the rebuild, slowing down the system and data access while this is going on – sometimes by as much as 40%, and for up to 4 days! And as drive capacities increase, the rebuild time is going to go up; for the new 4TB drives it will be something on the order of 5.5 days! Can you imagine an entire week, Monday – Friday?
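To illustrate why rebuild time scales with capacity, here is a rough calculation. The effective rebuild rate is an assumption (rebuilds are throttled by production I/O), chosen only to land in the same ballpark as the figures quoted above:

    # Rough illustration of why traditional RAID rebuild time grows with drive
    # capacity. The sustained rebuild rate is an assumed figure, not a spec.
    SECONDS_PER_DAY = 86_400
    assumed_rate_mb_s = 8       # assumed effective rebuild rate under production load

    for capacity_tb in (1, 2, 4):
        seconds = capacity_tb * 1_000_000 / assumed_rate_mb_s   # TB -> MB
        print(f"{capacity_tb} TB drive: ~{seconds / SECONDS_PER_DAY:.1f} days to rebuild")
    # At ~8 MB/s, a 4 TB drive works out to roughly 5.8 days, in line with the
    # ~5.5 days quoted above.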
The magic of Dynamic Disk Pools is that effective, well-performing systems can be made up of any number of drives. No powers of 2, even numbers, or multiples of some number of disks; any number of drives will work above the 11-drive minimum. When a drive is added, DDP rebalances data across the available drives. When a drive is lost, DDP rebalances data across the remaining drives.
What Dynamic Disk Pools means to your business:
1) Your business will probably not notice a drive loss
2) Your IT staff can deal with it when convenient – no “storage emergencies”
With traditional RAID logical drives, drives are organized into groups called Arrays. Here we have two Arrays of 10 drives in an 8+2 RAID 6 configuration with 4 hot spares. Hot spares in this configuration are idle and unused until a drive failure event.
When a drive fails, one of the hot spares is picked, and the failed drive's data is reconstructed onto this hot spare drive. This creates an I/O bottleneck while the data is sequentially recreated. Access to the logical drive with the failure is significantly diminished during this time.
Taking the same logical drives in the last example and putting them into a disk pool of 24 drives gives us a setup like this. All drives are active, there are no idle drives. And the hot spare capacity is distributed throughout the pool.
And if we encounter a drive failure, the data of the failed drive is rebuilt and redistributed across the remaining drives in the pool. This rebuild is performed in parallel, greatly increasing the speed of recovery.
Let's look at another example. Here each color represents a D-Chunk. As you can see, they are created pseudo-randomly throughout the disk pool. These are all different D-Chunks being used for the same logical drive. Each D-Chunk is distributed across 10 drives.
If we now fail one of the drives, we can see how the other pieces of the same chunk are used to recreate that RAID 6 piece onto another drive.
Because we have multiple D-Chunks, each chunk can have its piece recreated simultaneously. In effect, we have multiple RAID 6 objects affected by the same failed drive, and each of these RAID 6 objects is independent of the others, allowing simultaneous rebuilds.
Adding a replacement drive back into the pool is merely treated as an expansion operation and pieces of multiple chunks will get distributed automatically to the new drive.
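Here is a simplified sketch of the D-Chunk placement and rebuild idea – an illustration only, not the actual DDP algorithm. Each D-Chunk spans 10 drives chosen pseudo-randomly from the pool, so a single drive failure touches many chunks, and each affected chunk can rebuild its missing piece onto a different surviving drive in parallel:

    # Simplified sketch of D-Chunk placement and parallel rebuild
    # (illustration only, not the actual DDP algorithm).
    import random

    POOL = list(range(24))       # a 24-drive pool, as in the example above
    PIECES_PER_CHUNK = 10        # 8+2 RAID 6 style: 10 pieces per D-Chunk

    chunks = [random.sample(POOL, PIECES_PER_CHUNK) for _ in range(40)]

    failed = 5                   # pretend drive 5 fails
    rebuild_targets = {}
    for i, drives in enumerate(chunks):
        if failed in drives:
            # Rebuild target: any surviving drive not already holding a piece of this chunk.
            candidates = [d for d in POOL if d != failed and d not in drives]
            rebuild_targets[i] = random.choice(candidates)

    # Each entry is an independent RAID 6 rebuild; they can all proceed in
    # parallel, spreading the rebuild I/O across the whole pool instead of
    # funneling it onto a single hot spare.
    print(f"{len(rebuild_targets)} chunks affected by the failure")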
As a review: Arrays and Logical Drives in a traditional RAID configuration are optimized for enclosure utilization, but hot spares remain idle. DDP combines data and spares, distributing them within the pool, resulting in easier administrative tasks and better utilization of disk resources.
Rebuilding a failed drive with traditional RAID generates hot spots and significantly impacts performance. DDP uses all drives to reconstruct the failed data across the pool, allowing for parallel processing and significantly reducing rebuild times, resulting in higher data protection and better overall performance.
Array expansion with traditional RAID modifies the stripe width, which directly impacts tuned performance. DDP maintains a consistent stripe width and automatically redistributes data onto newly added drives, resulting in faster expansion completion times as well as a consistent stripe width that yields consistent performance.
Thin provisioning is a technical solution to a human problem. That human problem is that people don't know for sure how much storage space they are going to need, so they round up, often doubling their estimates or more. Every time the estimate is passed from department to department (Production Applications to Storage Administration), the storage space estimate gets inflated even more. This is because for decades running out of storage space was a seriously bad thing for applications, and it was extremely painful and time-consuming to provision more storage. So 5X or greater overprovisioning for applications is still not uncommon.
With Thin Provisioning, you set a starting and maximum capacity, and forget it. As the storage is used, additional capacity is allocated automatically and dynamically.
Thin Provisioning eliminates overprovisioning of storage by automatically allocating storage internally only as it is actually used, while reporting the full allocation to hosts, significantly lowering the amount of physical storage required.
Thin provisioning requires Dynamic Disk Pools, but at volume creation time you simply check a “thin provision this volume” box and enter the starting and maximum space. After that, autogrow takes care of all the rest.
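A minimal sketch of the thin-provisioning and autogrow behavior described above, using a hypothetical class rather than the actual firmware; the grow increment is an assumed value:

    # Sketch of thin provisioning with autogrow (hypothetical class). The host
    # sees the maximum capacity; physical space is allocated in increments only
    # as data is actually written.
    class ThinVolume:
        def __init__(self, starting_gb, maximum_gb, grow_step_gb=4):
            self.maximum_gb = maximum_gb      # capacity reported to the host
            self.allocated_gb = starting_gb   # physical space actually reserved
            self.used_gb = 0
            self.grow_step_gb = grow_step_gb  # assumed autogrow increment

        def write(self, gb):
            if self.used_gb + gb > self.maximum_gb:
                raise IOError("volume has reached its maximum capacity")
            self.used_gb += gb
            while self.used_gb > self.allocated_gb:        # autogrow kicks in
                self.allocated_gb = min(self.allocated_gb + self.grow_step_gb,
                                        self.maximum_gb)

    vol = ThinVolume(starting_gb=10, maximum_gb=500)
    vol.write(25)
    print(vol.maximum_gb, vol.allocated_gb, vol.used_gb)   # 500 26 25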
VAAI is an API with a set of primitives that allow offload of specific disk-storage related functions from an ESX hypervisor to the storage array controllers. The API primarily provides infrastructure-related performance enhancements that benefit the compute side of the cluster with improved scalability and resource density.
With large hypervisor-based implementations, it has become critical to optimize for application computation rather than data management tasks on the servers. With the legacy administrative model, storage-related tasks can have a large negative impact on SLAs; therefore a shift from a manual model of data administration to an automated, array-controller-native model offers operational benefits and risk reduction for the cluster. As a result, total cost of ownership is reduced because of a reduced scale-out requirement on the compute side, while processing performance is maintained or improved.
A feature that applies to all of the existing storage products is ALUA – Asymmetric Logical Unit Access. This is sometimes referred to as true active-active access for LUNs, meaning that any host port on either controller may be used for I/O to any LUN. We have active-active controllers today, but from a LUN perspective we only have active-passive. ALUA opens up the controller for easier integration with third-party failover drivers. Some examples of environments that will benefit from the new ALUA LUN ownership model include VMware, Linux DMMP and Solaris MPxIO.
Even with our own failover drivers, ALUA provides significant access and failover performance improvements, especially in clustered configurations and in SAN boot or root-boot configurations.
From a technical level we still maintain LUN ownership on a particular controller, and there is still a preference for I/O to be routed through the owning controller. But I/O sent to the alternate controller will now be shipped to the owning controller. There is a slight performance decrease for I/Os to the non-owning controller. For that reason, if over a 5-minute window a greater percentage of I/Os is being routed to the non-owning controller, the controllers will automatically “fail over” the LUN to the controller receiving the greatest number of I/Os. Failover times are much improved with ALUA, and I/O can continue even during the “failover” period. There is a drop in performance during this short duration, but not as severe as with the legacy implementation.
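A simplified sketch of that ownership-rebalance rule, as a hypothetical helper rather than the actual controller firmware: over the 5-minute window, if the non-owning controller handled the larger share of I/Os for a LUN, ownership moves to it.

    # Sketch of the ALUA ownership-rebalance decision (hypothetical helper).
    # I/O sent to the non-owning controller is shipped to the owner; if the
    # non-owning controller received more I/Os over the window, ownership moves.
    def rebalance_ownership(owner, io_counts):
        """owner: 'A' or 'B'; io_counts: I/Os each controller received in the window."""
        other = "B" if owner == "A" else "A"
        if io_counts.get(other, 0) > io_counts.get(owner, 0):
            return other      # "fail over" the LUN without interrupting host I/O
        return owner

    print(rebalance_ownership("A", {"A": 1200, "B": 300}))   # ownership stays on A
    print(rebalance_ownership("A", {"A": 300, "B": 1200}))   # ownership moves to B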
Bandwidth = Data × 8 bits / Time
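A quick worked example of the formula, with illustrative numbers only:

    # Worked example: 1 GB of data transferred in 8 seconds (illustrative values).
    data_bytes = 1_000_000_000           # 1 GB transferred
    time_s = 8                           # in 8 seconds
    bandwidth_bits_per_s = data_bytes * 8 / time_s
    print(bandwidth_bits_per_s / 1e9, "Gb/s")   # 1.0 Gb/s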
Data migration is limited to one-way migration from another FC storage device onto a Storwize Entry controller
Now, this page is the price and product specific analysis by vendor.
In the second column of this tab we can see the typical street price in US dollars for a basic configuration. This configuration includes dual controllers, 3.6 TB of capacity with SAS disks, and basic software. This information was taken from the Ideas Competitive Profiles site.
From the information in this column we can see that price variations are not significant; however, there are vendors, like EMC with the VNXe, that offer products at a competitive price but do not include some of the advanced features in their systems.
HP
What is the problem with the EVA? The EVA is an old architecture; the whole market knows this. The recently announced P6000 family is, depending on how you count the history, the 6th or 7th generation of this array. This is a dying family headed into a dead end. They have no replacement today.
You can see the strengths of the family. They have a huge install base and they have brand recognition. They have good software. They have good cluster extension with Microsoft clustering and Linux clustering, but other than that they have a tremendous number of weaknesses. The main weakness is that it is an aging install base with no future. That alone should do it for you, but they also have a much smaller cache. They cannot upgrade from the previous EVAs. They do not do data tiering at the sub-LUN level. They do not have data compression. They do not do clustering or have external virtualization, and essentially it's not a unified array. There are a lot of weaknesses. I think this should be an easy win for us, except that they are probably going to sell it very cheaply because they know their weaknesses.
DELL
Here is the danger. Dell is doing very good marketing of their Compellent arrays. If you look at their literature – and I am quoting here in blue – 'Fluid Data Architecture'. They use this image of a fluid, a liquid, which is very easy and very flexible, and all of these words are taken from their brochures – 'Agile', 'Dynamic', 'Flexible', '...adaptive to ever changing needs'. They tell a very nice story, and they do have a very flexible array. In other words, look at the fact that they can start very small and grow seamlessly, apparently all the way to 1000 drives. And they have a blade design for their I/O, like EMC does, so they can swap I/O. Each blade can hold 4 ports and they can have either 5 or 6 blades, depending on whether it's Series 30 or Series 40, so they have a lot of flexibility between Fibre Channel, iSCSI and FCoE. They do have a good stack of software, like thin provisioning. They were the first to invent sub-LUN tiering, like Easy Tier for IBM; they really were the first ones to market. They called it Data Progression. That's their claim to fame. They have Local and Remote Replication and a NAS Gateway. So they have a nice story to tell, but what's behind this flexibility?
Here again, their strengths are that they have very good marketing. They have a wide-ranging family that can grow all the way to 960 drives with flexible connectivity, and yes, let's give it to them, they have very flexible I/O capabilities. They do good auto-tiering at the sub-LUN level, and yes, they were the first ones to market with that, but now everybody has it, so I don't see it as much of a differentiator. And they do have thin provisioning.
EMC
What is the VNX family? You may all recall that EMC was for many years selling two families: the CLARiiON for block access and Celerra for file access. They were two completely different families. CLARiiON ran an Operating System called FLARE and Celerra ran its own Operating System called DART.
What EMC has done is to unify these two families into a single family. Basically, the hardware is no secret anymore: it's Intel based, x86, PCIe – just basic hardware. The secret sauce is in the software. They have a GUI called EMC Unisphere. This GUI is supposed to bring together the CLARiiON and Celerra code. The reality, I've been told, is that EMC doesn't seem to have really merged the code bases. EMC Unisphere seems to be more like an umbrella GUI: underneath, you either launch the FLARE code when you need block functionality or you launch the DART OS code when you need file functionality. The two code bases underneath are apparently running separately, so it's really not a full integration. VNXe and VNX. The reason I'm showing you this is that EMC has an extraordinary marketing machine. They will tell the customer that this is a simple, powerful and easy-to-use family, but the reality, when you go a bit deeper, is not like that.
The first thing you notice is up on top: the VNXe offers NAS and iSCSI connectivity only. They do not have Fibre Channel. There is no upgrade from VNXe to VNX, which does support Fibre Channel; they are different controllers, so there is no upgrade path.
The first model in the VNX family, the VNX5100, is Fibre Channel only, while the others also support iSCSI, NAS and FCoE. The first one seems to have a different controller from the other ones, so we doubt you can upgrade between the 5100 and the 5300. Maybe they can upgrade with data in place, but it would be a disruptive upgrade because you would have to change controllers.
We understand that EMC has a lot of features and a lot of software options, but they are bundling now and, as we understand it, they are selling them by suites, which makes it really easy to order, but the customer may be paying for more than they wanted. In other words, if they want one of the features inside a suite, they have to buy the whole suite, and they can even buy packs, which are several suites together. The other thing to remember about EMC is that they essentially have no software warranty. I think the software warranty is something like 90 days, compared to IBM, which is a year.
*Basic configuration includes dual controller, 3.6 TB SAS raw and basic SW.
**Dual Clustering will be integrated into the Thunderbird Family in a future release
Source: Ideas Competitive Profiles, CompeteLine Collateral and External Webpages of each competitor