This presentation is the second of three parts. See part 1 for overall business value proposition, product positioning and Application Optimized Storage solution messaging. Part 3 delves into software and solutions for the Hitachi TagmaStore Adaptable Modular Storage and Workgroup Modular Storage.
A quick overview of our company, Hitachi Data Systems Corporation. Hitachi Data Systems is a wholly owned subsidiary of Hitachi, Ltd., originally formed in 1989 as a joint venture between Hitachi, Ltd. and Electronic Data Systems (EDS). Hitachi, Ltd. owned 86% of the original entity, with EDS owning the remaining 14%. Fast forward to April 1999: Hitachi Data Systems became a wholly owned subsidiary of Hitachi, Ltd. and has operated as such ever since. The key point I’d like to make here is that in North America, Europe, and other key geographies, Hitachi, in various forms -- whether as Hitachi Data Systems, its predecessor National Advanced Systems, or that company’s predecessor, Itel -- has been selling industry-leading storage and server solutions for the past three decades. This is an impressive track record that most storage companies we compete with today cannot claim. Our go-to-market strategy comprises direct and indirect sales in over 170 countries and regions. Hitachi Data Systems has a 3,200-strong global employee base, and it is expanding. Within Hitachi, Ltd., we are positioned as the strategic focal point for all storage infrastructure solutions, storage management software, and consultative services pertaining to storage. We have also been recognized for excellence in customer service, which is very important to us as a business. We have been praised by Bank of America and SBC, and we have won a “Supplier Excellence” award from Texas Instruments. There are many other awards Hitachi Data Systems has received that are not listed on this slide, including one from a large auction company that starts with an “e.”
A quick overview of our parent company, Hitachi, Ltd. Hitachi, Ltd. is a public company traded on the Tokyo Stock Exchange under ticker symbol “6501” and in the U.S. on the New York Stock Exchange under the ticker symbol “HIT.” It is one of the world’s largest integrated electronics companies. Many industry watchers essentially view Hitachi, Ltd. as a unique fusion of IBM and General Electric, in that Hitachi, Ltd. encompasses the broad spectrum of IT products and solutions and the semiconductor fabrication expertise you see at a company like IBM, while also spanning the heavy machinery, nuclear reactor engineering, and other industrial goods that General Electric produces. Hitachi, Ltd. is a manufacturer of over 20,000 products. We believe that gives us a competitive advantage relative to storage-only vendors, in that we can leverage the IP and the research talent across many thousands of products and bring much of that IP and research talent to bear on a central core business area: storage. Again, the main point to emphasize here is that cross-pollination across multiple product disciplines is a key differentiator that has contributed to Hitachi’s vast product portfolio. Currently, there are about 932 subsidiaries within Hitachi with over 355,000 employees. The unique thing is that Hitachi, Ltd. is home to over 2,000 Ph.D.s -- that is to say, there are more Ph.D.s within Hitachi than there are employees at some of our competitors’ companies. So Hitachi is very proud of having one of the largest concentrations of Ph.D.s in the information technology and science space.
Hitachi’s fiscal year runs from April to March. Total FY07 revenue: for the fiscal year ending in March 2008, total sales were a little over $112.2 billion. Any investment made in information technology -- whether networking, telecommunications, enterprise servers, supercomputers, storage systems, or other storage solutions -- Hitachi Data Systems leverages through cross-pollination for the development of other products. Now let’s look at the composition of Hitachi, Ltd.’s business and the vertical markets it competes in. Hitachi, Ltd. has seven distinct business segments, which together comprise the 20,000-plus product portfolio. Starting at the lower left, comprising about 22% of total sales for last fiscal year, is the Information Systems and Telecommunications Group. This is the most strategic business segment for Hitachi, and often the most profitable as well. It comprises storage systems, storage consulting services, supercomputers, telecommunications equipment, gigabit Ethernet routers, SONET switches, and enterprise blade servers, which are now being sold in North America, Korea, Japan, and other geographies. Basically, all information systems, telecommunications, IT, and networking are unified in one group spanning servers, networking, and storage. Unification among these three facilitates great cross-pollination efforts. Power & Industrial Systems comprised 28% of total revenues last fiscal year -- a very profitable business segment for Hitachi, Ltd. It comprises everything from Shinkansen bullet trains (the trains in Japan and other regions of the world that can exceed 150 to 160 miles per hour) to thermonuclear fusion reactors, heavy earth-moving equipment, various turbines being made in conjunction with General Electric, and so forth.
If your customer is interested in earth-moving equipment, Hitachi produces bulldozers, cranes, and other earth-moving equipment. (Note that Caterpillar competes with Hitachi.) There is also the Financial Services business segment, comprised of various capital and leasing corporations within Hitachi, Ltd., which constitutes about 3% of overall total sales. The Electronic Devices segment covers primarily semiconductor manufacturing equipment, digital media, and consumer products, and contributed 10% of overall revenues for FY07. If you look around here within our Executive Briefing Center, all of the projectors, the plasma screens, the LCD screens -- basically everything that comprises the home theater experience -- is produced by Hitachi, Ltd. Anything you can imagine, from DVD players to plasma screens to LCD screens, stereo equipment, and high-definition video equipment -- all digital media products are produced by Hitachi, Ltd. Another important point: the Electronic Devices division of Hitachi, Ltd. has its own semiconductor fabrication operation, which provides a distinct advantage over competitors. While many competitors rely upon third parties for semiconductor chip manufacturing, we have our own fabrication plants, which gives us a powerful story from a vertical integration perspective. Logistics, services, and other segments round out the portfolio. High Functional Materials & Components is a rather interesting group with tremendous industry expertise that not many people are aware of. For one, Hitachi, Ltd. is a key supplier to automotive companies such as Honda, Toyota, Mazda, and General Motors. Case in point: Toyota recently turned to Hitachi, Ltd. for hybrid motors for its Lexus RX 400h hybrid. The turbochargers in the Mazda Miata and the hoses and rubber materials in many Nissan cars leverage manufacturing innovations from Hitachi, Ltd. Another example: Hitachi, Ltd.
owns a subsidiary called Xanavi (spelled x-a-n-a-v-i), which is a leading provider of navigation systems for automobiles. In fact, if you go to your local Infiniti or Nissan dealer, all the navigation systems in those vehicles are from Xanavi, owned by Hitachi.
Storage Services Evolution: Infrastructure Road Map -- what’s happening with IT systems? Customers are moving from SAN islands to a consolidated storage infrastructure. The next step is network-accessible services, where storage is treated as a utility and applications access it according to their performance, availability, and other needs. Over time, as servers become more commoditized and the DATA becomes the critical factor in IT, the ability to reconfigure your server storage farm on the fly and reallocate storage resources will become THE critical enabling technology. Even today, many customers who are thinking of employing “grid”-based computing models are looking to Hitachi Data Systems to supply platform infrastructures that enable a grid model. As an example, with a grid, the ability to boot from multiple “LUN 0’s” becomes the ante required to play.
These are the IT challenges that customers almost always mention. Today’s presentation will demonstrate how HDS is addressing each one, with our suite of SRM software products and services.
Developer: No Changes are required to this slide. Presenter: This slide is the first of three that introduce your audience to S.O.S.S. Services Oriented Storage applies service-oriented architecture (SOA) concepts to storage to deliver a storage platform that can be readily reconfigured and optimized to changing business requirements; our solutions deliver a process-oriented service approach to storage rather than the traditional piecemeal, task-oriented approach, which leads to needless redundancies, over-subscription of storage, management complexity, and possible compliance exposure. Let’s talk briefly about some definitions: Simply stated, Service Oriented Architecture (SOA) is a business-centric IT architectural approach that supports integrating your business as linked, repeatable business tasks, or services. So, a service-oriented architecture is essentially a collection of services that can be shared and can communicate with each other. The communication can involve either simple data passing or it could involve two or more services coordinating some activity. Practically speaking, instead of running a bunch of discrete applications that are expensive, complex, and difficult to manage, the goal of SOA is a flexible IT infrastructure enabled by a common set of services that can be leveraged across all applications. The result for IT is greater flexibility and efficiency with reduced cost and complexity. Why is this Important? As we all know, connecting IT with business has been the mantra of IT organizations for many years. However, the reality often finds the data center mired in redundancies based upon the proliferation of monolithic storage architectures and infrastructure resulting in limited IT flexibility to adapt to business requirements while also incurring increased cost, complexity, and risk. Progressive IT organizations are adopting a services-oriented approach to managing core IT functions. 
Services are increasingly defined in users’ terminology, and the IT infrastructure needed to support those services is mapped and managed to service level agreements (SLAs). The ability to do this in a cost-effective manner is the trick. In the past, storage systems lagged behind servers and networks, whose management tools have adapted to these needs. Hitachi Data Systems has changed all of that. How is Hitachi Data Systems Different? Hitachi Data Systems has been developing its storage strategy with a services-oriented approach for many years. Some of the unique hallmarks of the Hitachi Data Systems strategy include:
- Control unit virtualization with enhanced storage services that enable heterogeneous (HDS and other vendors) storage systems to interact and work in concert to optimize storage performance, data protection, and system availability
- An integrated portfolio of storage management, tiered storage, business continuity, and data migration and mobility services that enables organizations to leverage a single set of tools for all of their storage and data management challenges
- Most recently, the addition of file (HNAS) and object (HCAP) services, enabling organizations to leverage a single platform for ALL their data storage requirements
How do customers benefit? Most importantly, organizations moving to a services approach to storage can now respond more quickly to business and technology change and:
- Reduce cost and increase efficiency by reducing the complexity of their infrastructure and automating the process of storage management
- Boost utilization and reduce over-subscription of storage resources
- Cost-effectively address a growing array of structured and unstructured data types and applications
- Improve availability, reliability, and SLA consistency for midrange and small enterprise data applications
- Provide metrics and enable policies to measure and automate the use of storage services
This is what we call Services Oriented Storage.
We’ll talk a lot more about our services-oriented approach to storage throughout this presentation.
Key Objective: introduce the customer to Hitachi’s ‘One Platform for All Data’ strategy and articulate the unique value of our strategy and how it is superior to the competition. The Storage Command Suite is the heart of SOSS. It enables the storage capabilities -- mapping applications to the physical storage:
- Provisioning storage systems
- Virtualization services that provide flexibility, such as transparent, non-disruptive data migration
- Reporting and forecasting to improve planning and minimize service disruptions
- Performance monitoring to ensure SLAs are met and to optimize utilization
Key Points: 1.) Hitachi Data Systems’ focus is on storage. Our goal is to help customers closely align their storage infrastructure with their business requirements by delivering storage solutions that reduce complexity, cost, risk, and TCO while increasing IT efficiency. 2.) Our strategy is to deliver ‘One Platform for All Data’. To understand why this is important, let’s look at the customer challenge:
- The amount of digital data being created and stored continues to grow unabated.
- Regulatory and compliance requirements are driving organizations to store more data for longer periods of time. Furthermore, they need to be able to search for specific data if they’re ever asked to.
- The explosive growth in semi-structured (e.g., email) and unstructured (file) data is forcing customers to look for new ways to deal with files, metadata, and content.
- Every application has different storage requirements for performance, availability, retention, etc.
- Vendors traditionally throw discrete solutions at each of these different problems.
- It’s still cheaper for organizations to buy storage than to manage it, so customers typically throw more storage at the problem.
- IT budgets remain relatively flat.
Because the traditional response to these challenges has been to throw more storage at the problem, customers end up managing multiple silos for their different application requirements. This is complex, costly, and inefficient. Hitachi addresses these challenges with a unique ‘One Platform for All Data’ strategy comprised of an integrated family of:
1.) Storage arrays for applications ranging from mission-critical OLTP to long-term archiving.
2.) Intelligent storage controllers to virtualize and simplify heterogeneous storage environments.
3.) Storage management solutions to manage your entire storage infrastructure.
4.) Tiered storage and data mobility solutions to simplify your infrastructure and reduce cost by aligning your data with the right tiers of storage.
5.) Business continuity solutions to support all backup, local, and remote replication requirements.
6.) Archiving solutions that provide enterprise-class archiving and search across all applications.
7.) NAS solutions for high-performance applications, SAN/NAS consolidation, and common file/print services.
With Hitachi’s strategy, all of these capabilities work in unison, enabling customers to leverage ‘One Platform for All Data’. The benefits can be immense. Once the customer gets the general idea of our platform strategy, the next key is to understand the customer’s key pain points and how they measure success. Do they want to save money, reduce risk, meet a compliance requirement, ensure availability of mission-critical applications, etc.? If you understand that, you can translate it into what we can deliver. Bottom Line: Hitachi has a very unique strategy enabling customers to leverage a single platform for all their storage requirements. This is very different from what our competitors, in this particular case Sun, can offer. Customers should walk away from this part of the discussion with a clear understanding of our platform strategy and how it can benefit them.
For further education, here are some additional facts about data:
- 20% Structured Data (databases, transactional, data warehouses)
- 80% Unstructured (objects and files) and Semi-structured (e-mail) Data
- <5% of unstructured data is managed through content management…and shrinking
- Unstructured data (files, email, content) is growing at 10X the rate of structured data
- 2,272PB of unstructured data today, 20,000PB in 2010; most is dormant after 90 days (source: ESG)
Value of the File…Content Is King:
- File attributes help basic classification
- Content attributes (metadata) enable extra classification and extra descriptions
- Content inside the file enables text searching…informational value
Key Objective: illustrate to the customer how SOSS is built on an integrated platform of services and why that is important to them. Key points: 1.) SOSS provides a single platform for all block, file, and object services. This eliminates the traditional silo approach to storage we highlighted earlier in the presentation. 2.) Using SOSS, customers can align their storage with application requirements based upon metrics including QoS, SLA, I/O, RTO, etc. Some of these metrics are highlighted in the Sample Metrics portion of the graphic. 3.) Professional services are a key part of SOSS. Hitachi offers services for consulting, design, implementation, and health checks. Some of our business-centric consulting services are highlighted in the Storage Practices portion of the graphic. Presenter Commentary: As we have described throughout this presentation, the Services Oriented Storage Solutions platform is a business-centric concept enabling organizations to closely align their storage infrastructure with their business requirements. While many storage vendors may claim to have business-centric strategies, only Hitachi can deliver, because Services Oriented Storage Solutions are built upon a dynamic, flexible platform of integrated storage services enabling customers to optimize their storage infrastructure while reducing cost and complexity.
The platform is both powerful and simple. The architecture summary illustrates that the Services Oriented Storage Solutions are comprised of an integrated stack of services, including:
- Block Services -- volume virtualization, discovery, provisioning, partitioning, volume management, replication, migration, security, and metering
- File Services -- file virtualization, replication, migration, security, encryption, and archiving
- Object Services -- content services including index, search, classification, and security
These services, used individually or collectively, deliver Services Oriented Storage Solutions that meet the necessary application storage requirements based upon metrics (listed under Sample Metrics) including I/O, service level agreements, quality of service, recovery time and recovery point objectives (RTO and RPO), and retention. Most importantly, the unique value of Services Oriented Storage Solutions is the ability to leverage all of these services on a single, integrated storage platform, managed through a common management interface. Finally, SOSS also incorporates professional services for consulting, design, implementation, and health checks. The Storage Practices column highlights some of our key business-centric consulting services. Bottom Line: SOSS is unique in the industry. If customers want to break the logjam of complexity, cost, and inefficiency, they should go with SOSS.
The HDS Platform -- enabled by the Services Oriented Architecture (NOTE: This slide is used in conjunction with the next slide -- notes are for both). As we move forward in our environment, the platform will be enabled by a Services Oriented Architecture. We look at things from four components:
- Data Producers
- Data Consumers
- Data Storage
- Data Protection
Data Producers are applications that users interact with, like SAP, Oracle, and Exchange, or applications that the system interacts with, like NetBackup and TSM. These are applications that produce data. That data may be produced in a NAS format and be consumed in a data storage environment via a NAS interface, a virtual tape interface, a content archival interface, or a LUN interface. You may store that data in a modular environment, an enterprise environment, or a virtualized environment. It does not matter. You need to protect that data at the Data Protection level and manage it through a consistent interface. [Note: Pop-ups will display.] We can get this through a virtual tape product or through content archival, our HCAP product. [Note: Pop-ups will display as we move into the storage environment; the modular product, the USP, and the USP V will appear.]
Please see notes from SLIDE above – this is a BUILD
09/07/12 Our product line now consists of two families, which share a common integrated management suite.
Should you ever outgrow your Hitachi Simple Modular Storage 100, or should you need very high performance and Fibre Channel connectivity, you can easily migrate to Hitachi’s Workgroup or Adaptable Modular Storage family, scaling to over 300TB with enough performance for any modular workload!
Hitachi Simple Modular Storage will be available beginning in October 2007 from Hitachi Data Systems and our many reseller partners throughout the world.
The AMS500 has two independent back-end 2-Gbit paths from each controller. There are two connections (two pairs of IN-OUT connections per controller). As there are two active back-end paths per controller, all disks can be seen by just one controller in the event of a failure of the alternate controller. Both FC and SATA enclosures may be installed on this system.
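The back-end topology described above can be sketched in a few lines. This is an illustrative model only, not the actual AMS500 firmware: it assumes each controller drives two loops and each enclosure is cabled to one loop from each controller, so a single surviving controller can still reach every disk.

```python
# Hypothetical model of the AMS500 dual-controller back end:
# two back-end loops per controller, every enclosure cabled to
# one loop from each controller.

CONTROLLER_PATHS = {
    "ctl0": ["loop0A", "loop0B"],
    "ctl1": ["loop1A", "loop1B"],
}

# Each enclosure sits on one loop from each controller.
ENCLOSURE_LOOPS = {
    "enc0": {"loop0A", "loop1A"},
    "enc1": {"loop0B", "loop1B"},
}

def reachable_enclosures(alive_controllers):
    """Enclosures still visible through the loops of the surviving controllers."""
    loops = {p for c in alive_controllers for p in CONTROLLER_PATHS[c]}
    return {e for e, cabled in ENCLOSURE_LOOPS.items() if cabled & loops}

# With both controllers alive, or with either one failed, all
# enclosures remain reachable -- the failover property the slide claims.
print(reachable_enclosures(["ctl0", "ctl1"]) == {"enc0", "enc1"})  # True
print(reachable_enclosures(["ctl1"]) == {"enc0", "enc1"})          # True
```

The point of the sketch is simply that losing a controller removes loops, not enclosures, because every enclosure retains a path through the survivor.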
Presenter: Use this slide to briefly introduce the product. This slide should allow you to introduce the strategic nature of this product, introduce how it may be part of a larger family of products without creating confusion, and describe the basic functionality this product provides. More detail on what the product does, and how it does it, is provided on the subsequent slides. Our advanced midrange systems offer industry leading features including: Cache partitioning and modular volume migration allowing storage administrators to quickly adapt the storage to meet changing application requirements. They also offer energy efficient storage with a “power savings” feature that spins down and turns off storage when not required. Like their enterprise counterparts, the Hitachi’s mid-range Adaptable and Workgroup Modular Storage families support all major OS and files systems and come with fibre channel, iSCSI and NAS attached options. The AMS1000 offers dual protocol support.
Key value: two parity drives allow a customer to lose up to two HDDs in a RAID group without losing data. RAID groups configured for RAID-6 are many thousands of times less likely to lose data in the event of a failure. RAID-6 performs nearly as well as RAID-5 (for similar usable capacity). RAID-6 also gives the customer options as to when to rebuild the RAID group. With RAID-5, when an HDD fails, the RAID group must be rebuilt immediately, since a second failure may result in lost data. During a rebuild, applications using the volumes on the damaged RAID group can expect severely diminished performance. A customer using RAID-6 may elect to wait to rebuild until a more opportune time (night or weekend) when applications won’t require stringent performance. HDD roaming allows the spare to become part of the RAID group; no copy-back is required, saving rebuild time.
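The capacity/tolerance trade-off above can be made concrete with a small sketch. This is illustrative arithmetic, not Hitachi firmware logic; the function names are our own.

```python
# Illustrative comparison of RAID-5 (single parity) and RAID-6
# (dual parity) for one RAID group of a given size.

def usable_drives(total_drives: int, raid_level: int) -> int:
    """Data drives left after parity overhead: 1 drive for RAID-5, 2 for RAID-6."""
    parity = {5: 1, 6: 2}[raid_level]
    if total_drives <= parity:
        raise ValueError("RAID group too small for this level")
    return total_drives - parity

def survives(failed_drives: int, raid_level: int) -> bool:
    """Data survives as long as concurrent failures do not exceed parity drives."""
    parity = {5: 1, 6: 2}[raid_level]
    return failed_drives <= parity

# An 8-drive group: RAID-6 gives up one more drive of capacity than
# RAID-5, but absorbs a second failure -- which is exactly what makes
# a deferred (night/weekend) rebuild a safe option.
print(usable_drives(8, 5))   # 7 data drives
print(usable_drives(8, 6))   # 6 data drives
print(survives(2, 5))        # False: a second failure loses data
print(survives(2, 6))        # True: dual parity covers it
```

This is why a RAID-5 rebuild is urgent while a RAID-6 rebuild can wait: during the window before rebuild completes, RAID-5 has zero remaining redundancy and RAID-6 still has one parity drive in reserve.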
Cache Partitioning allows an AMS customer to apportion cache memory to suit the needs of any application. Cache segment sizes can be allotted in 4KB, 16KB, 32KB, 64KB, 128KB, and 512KB segments. These segments allow data to be moved into cache more efficiently from the RAID group (which is also flexible). This way, less cache is wasted, and business-critical applications can be assured that cache is readily available, while less critical applications can be restricted to other segments. No other vendor offers this type of flexibility, and the AMS outperforms its competitors in this market.
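A minimal sketch of the partitioning idea follows. The `CachePartitioner` class and its interface are hypothetical, invented for illustration; only the list of segment sizes comes from the slide. The point is that partitions are carved from a fixed cache budget, so a critical application’s partition cannot be consumed by other workloads.

```python
# Hypothetical sketch of cache partitioning (not the AMS management API):
# carve a fixed cache budget into named partitions, each with its own
# segment size, so critical applications keep guaranteed cache.

VALID_SEGMENT_KB = {4, 16, 32, 64, 128, 512}  # sizes from the slide

class CachePartitioner:
    def __init__(self, total_cache_mb: int):
        self.total_cache_mb = total_cache_mb
        self.partitions = {}  # name -> (size_mb, segment_kb)

    def allocated_mb(self) -> int:
        return sum(size for size, _ in self.partitions.values())

    def add_partition(self, name: str, size_mb: int, segment_kb: int) -> None:
        if segment_kb not in VALID_SEGMENT_KB:
            raise ValueError(f"segment size {segment_kb}K not supported")
        if self.allocated_mb() + size_mb > self.total_cache_mb:
            raise ValueError("partition would exceed total cache")
        self.partitions[name] = (size_mb, segment_kb)

# Example: on a 2GB cache, reserve most of it for an OLTP database
# using small segments, and a smaller large-segment partition for
# sequential backup streams.
cache = CachePartitioner(total_cache_mb=2048)
cache.add_partition("oltp", size_mb=1536, segment_kb=16)
cache.add_partition("backup", size_mb=512, segment_kb=512)
print(cache.allocated_mb())  # 2048
```

The design choice mirrors the pitch: small segments suit small random I/O, large segments suit sequential streams, and hard partition boundaries keep the two from competing.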
Multi-protocol support provides AMS customers with the flexibility of using their storage for Fibre Channel SANs, iSCSI SANs, or both. Customers can use this capability to connect the same storage array to high-performance Fibre Channel-based servers as well as lower-cost iSCSI-based servers. Customers also have the flexibility to migrate their storage from iSCSI to Fibre Channel SANs. This flexibility provides excellent investment protection and is not available on many competitive modular storage systems.
With the introduction of the iSCSI interface for the WMS100, AMS200, and AMS500 systems, Hitachi Data Systems has further advanced the ability of its customers and Channel Partners to deploy storage that is optimized to their applications. The AMS1000 takes this one step beyond other vendors by offering customers the ability to choose multiple interfaces while still having only one scalable array to manage.
The WMS100, AMS200, and AMS500 systems can provide iSCSI and Fibre Channel multi-protocol support with an optional bridge connected to a Fibre Channel controller on the storage array. This option allows a single storage array to store data for heterogeneous SANs.
AMS systems also have a “power savings” feature which allows volumes to be powered off when there is no IO. This feature is ideal for applications with scheduled but infrequent access, such as backup volumes, archives, or even unallocated storage. It saves on electric utility costs as well as data center cooling costs. Unlike dedicated “MAID” (massive array of idle disks) systems, which limit the number of drives that can be spinning at any one point in time, Hitachi allows volumes to be spun up at the customer’s discretion. There is no limitation on how many volumes must be off at one time. No other vendor offers this flexibility.
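The spin-down policy above amounts to a simple idle-time rule. The sketch below is hypothetical -- the function and volume names are invented for illustration -- but it captures the contrast with MAID: candidates are selected purely by idleness, with no cap on how many volumes may remain spinning.

```python
# Illustrative spin-down policy sketch (names are hypothetical, not the
# AMS power-savings interface): flag any volume idle longer than a
# threshold. Unlike MAID, there is no quota forcing a fixed number of
# volumes to stay off; spin-up is at the administrator's discretion.

def volumes_to_spin_down(idle_seconds: dict, idle_threshold_s: int) -> list:
    """Return (sorted) volumes whose last IO is older than the threshold."""
    return sorted(vol for vol, idle in idle_seconds.items()
                  if idle > idle_threshold_s)

# Backup and archive volumes have been idle for days; the OLTP volume
# saw IO two seconds ago and stays spinning.
idle_times = {"backup01": 90000, "archive01": 400000, "oltp01": 2}
print(volumes_to_spin_down(idle_times, idle_threshold_s=86400))
# ['archive01', 'backup01']
```

A one-day threshold (86,400 seconds) fits the scheduled-but-infrequent access pattern the slide describes, such as nightly backup targets.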
Hitachi TagmaStore™ Adaptable Modular Storage and Workgroup Modular Storage are the new product names for Hitachi’s midrange offerings. The TagmaStore brand name now refers to all Hitachi storage products. The Hitachi TagmaStore™ Universal Storage Platform family replaced the Hitachi Lightning 9900™ V Series enterprise storage systems in October 2004. Now the Adaptable Modular Storage line enhances the Hitachi Thunder 9500™ V Series modular storage systems, which will remain available well into 2006. The Adaptable Modular Storage and Workgroup Modular Storage models offer many unique features that the Thunder 9500 V Series does not. However, the Hitachi Thunder 9585V™ ultra high-end modular storage still offers very high performance and capacity and will continue to appeal to the market. The Hitachi TagmaStore™ Network Storage Controller model USP V/VM is a rack-mounted Universal Storage Platform device. We have priced and positioned the product for the high end of the midrange market, above the Thunder 9585V system but below the model USP100 in terms of scalability, performance, and price. The Workgroup Modular Storage line continues Hitachi Data Systems’ branding of the Workgroup Modular Storage descriptor for SMB products. This presentation touches briefly on all Hitachi Data Systems offerings and then covers the Adaptable Modular Storage and Workgroup Modular Storage products in greater detail. More information for the USP V/VM and Thunder 9585V system may be found in those product presentations. Note that the USP V/VM is under NDA. This presentation is ONLY for Hitachi Data Systems employees and authorized resellers who have signed the NDA form and for current and prospective customers under NDA. All information is subject to change.
The Hitachi approach, by virtue of our ability to separate the controller from the back-end media, is that customers can take their people, their processes, their resources, and their existing storage and continue to utilize them, because our storage controllers -- our intelligent virtual mega-controllers -- can assimilate non-disruptively into existing IT environments. They can complement customers’ environments. Hitachi is not asking clients to rip and replace. Customers can reinvigorate existing assets, obtain the functionality of Hitachi’s controllers, and enhance their existing investments. No other vendor can provide this level of business-enabling storage functionality to reinvigorate, improve the performance of, and extend the life of existing assets. We’re talking about non-disruptive assimilation, where Hitachi has managed to enter large accounts that were previously the domain of our arch competitors, because we’ve enabled these clients to put a USP or an NSC in front of their existing storage and complement it, give it new functionality, and provide a single replication engine and a single management interface across all of their storage assets. It’s essentially a storage management solution that complements their assets. With Hitachi, you can attain this functionality in a non-disruptive fashion. That’s our approach. The competition, on the other hand, says: your people, your processes, your resources -- throw it all away. It’s all rip and replace. Forget about your prior investments. Any new functionality the competition may or may not have added to their controllers is interlocked with the drive or disk array frames to the left and right of the central controller. If you want that functionality -- if they have even put it in the new controller -- you have to buy the entire thing.
Whereas with Hitachi, if you look at the new USP V, you can simply buy the controller and apply all of that functionality to your JBOD, to your existing storage capacity: your DMXs, your CLARiiONs, your IBM DS systems, your LSIs, white-box storage, whatever you may have.
Introducing a new dimension for storage virtualization: a 247-petabyte address space. If you look at the industry's high-end, monolithic, aging storage systems, they just keep getting bigger and bigger, and high-end vendors keep stuffing in more and more drives interlocked with their controllers, without thinking about the management issues that brings. In any high-end storage system, commodity media sits in the array cabinets directly to the left and right of the controller, and in the middle is an intelligent controller. That controller is where the majority of every storage vendor's R&D investment goes, and the majority of our R&D investment goes there as well. That is where the new software, the new microprocessors, the new architectural innovations, the new services, and the new intellectual property live. All the vendors spend big R&D budgets trying to embed more and more intelligence into that intelligent, virtual storage controller. However, Hitachi is the only company that has completely separated the controller from the back-end disk media, giving customers the flexibility to invest only in the most valuable part of the storage system, the controller, the intelligence, thereby enabling them to get the latest functionality and apply it to their existing storage capacity without being forced to buy more and more capacity and larger and larger storage arrays.
The disaggregation of storage is key to the success of our industry going forward. This is the direction Hitachi has been heading, and this product expands it further, enabling us to apply all the key functionality that resides in our controller to externally attached storage devices, now across a 247-petabyte address space. Additionally, this enables Hitachi to compete and to sell customers on the business value of this intelligent storage controller, which is now a storage management solution, not a box. A box is something that exists in and of itself, an isolated piece of equipment with an isolated management console. This goes beyond the confines of a box to provide common storage services to externally attached storage devices. It is not a box; it is not a system. It is a platform, a true storage services platform. Again, we are going beyond virtualization.
Our product line now consists of two families, which share a common integrated management suite.
With this announcement, Hitachi is changing the industry … again. We are delivering the industry’s first Universal Storage Platform — a custom-designed tight integration of hardware and software. The Universal Storage Platform is a new industry category featuring breakthrough technologies not available in any other storage systems today. The Universal Storage Platform will enable a new paradigm for managing and deploying the storage infrastructure. The Universal Storage Platform includes an embedded virtualization layer capable of managing up to 32 petabytes of internal and external storage, with up to 332TB of internal storage. This breakthrough solution can logically partition the physical storage cache, capacity, and ports and attached storage into secure, independently managed virtual private storage machines. It brings a new combination of technologies, such as disk-based journaling and “pull” copying, that support storage-agnostic data replication. All of this is impossible without a hardware platform powerful enough and reliable enough to drive the software functionality. The Universal Storage Platform delivers with the third-generation Hitachi crossbar switch architecture – pushing 2 million IOPS, 68GB/sec cached bandwidth, and 256 concurrent memory operations – all at least 5 times more than other storage systems available today. All combine to deliver an unparalleled value proposition by reducing TCO as much as 40% over three years. Let’s take a closer look at each of these valuable innovations.
With this announcement, Hitachi is changing the industry … again. We are delivering the industry’s first Universal Storage Platform VM — a custom-designed tight integration of hardware and software. The Universal Storage Platform VM is designed to bring high-end Enterprise class virtualization features and reliability to the Small Enterprise and growing mid-market customers. The Universal Storage Platform VM includes an embedded virtualization layer capable of managing up to 96 petabytes of internal and external storage, with up to 72TB of internal storage. This USP VM can logically partition the physical storage cache, capacity, and ports and attached storage into secure, independently managed virtual private storage machines.
This slide should help customers understand technically how the connection of externally attached storage is achieved. The external storage looks as if it were part of the Network Storage Controller platform, with no distinction. The user will be able to see where the volume is physically created and can manage it accordingly (assign the volume to an application, use it as secondary SI volume, etc.), across heterogeneous storage platforms from the same device-management screen.
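To make the idea concrete, here is a minimal, purely illustrative sketch of the concept the slide describes: internal and externally attached volumes presented through one namespace, with the physical location still queryable for management. All class and device names are invented for illustration and are not the actual Hitachi software interface.

```python
# Hypothetical sketch of a virtualization layer's unified volume view.
from dataclasses import dataclass

@dataclass
class Volume:
    ldev: str       # logical device ID as seen by the controller
    backend: str    # "internal" or the external array that owns the disks
    size_gb: int

class VirtualizationLayer:
    """Presents every discovered volume under a single namespace."""
    def __init__(self):
        self.volumes = {}

    def discover(self, backend, ldev, size_gb):
        self.volumes[ldev] = Volume(ldev, backend, size_gb)

    def list_volumes(self):
        # Callers see internal and external volumes identically:
        # both appear as ordinary LDEVs of the controller.
        return sorted(self.volumes)

    def locate(self, ldev):
        # Management tools can still ask where a volume physically lives.
        return self.volumes[ldev].backend

ctl = VirtualizationLayer()
ctl.discover("internal", "00:10", 500)
ctl.discover("external:Thunder9585V", "00:11", 1000)
print(ctl.list_volumes())   # one view, no distinction
print(ctl.locate("00:11"))  # physical location on request
```

The point of the sketch is the asymmetry: day-to-day operations use the unified view, while placement decisions can still drill down to the owning backend.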
Hitachi ShadowImage™ In-System Replication software can also be used to mirror data volumes. [CLICK] For example, you might use it to mirror a copy as a hot backup on an internal array, so that in the event of a failure you could swap over to that system. And, as is true with data on the Hitachi Lightning 9900 V Series systems, ShadowImage can maintain as many as nine additional copies of a volume of data. On the Thunder 9500V Series systems, for example, ShadowImage software can create only one mirror of a volume. Using the Universal Virtualization Layer, you can use the enterprise-class version of ShadowImage on other storage systems as well. [CLICK] Now you can mirror as many as nine copies of a volume on any storage system.
Using the virtualization capabilities, in addition to a mirror for hot backup on an internal system, you can at the same time mirror another copy off to an external storage system that might be used for offline backup, and move a third copy perhaps to yet another system that might be used for development or testing. All with one replication product.
This slide shows the partitioning capabilities of the USP. We’ve created 3 logical partitions (yellow, green, and gray), assigning a few volumes to each partition. In this case, each partition has some internal volume (orange) and a mix of volumes representing different storage tiers. Also, note that we’ve partitioned cache as well. In this case, we’ve split it in thirds, more or less. (It doesn’t need to be that way… we’ll change that shortly.) We’ve assigned ports for each partition too.
Once created, Private Virtual Storage Machines allow the storage administrator to reallocate storage resources as needed:
[CLICK] Allocate additional storage to Partition #1 (one internal USP volume).
[CLICK] Allocate more storage to Partition #3 (two volumes, from different tiers).
[CLICK] Increase the cache for Partition #2 (at the expense of both Partitions 1 and 3, in this case).
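The cache step above is just a conserved-pool reallocation: the total cache is fixed, so growing one partition shrinks others. A toy sketch, with invented partition names and sizes (not actual product behavior):

```python
# Illustrative only: logical partitions owning shares of a fixed cache pool.
class PartitionedCache:
    def __init__(self, total_gb, shares):
        assert sum(shares.values()) == total_gb
        self.total_gb = total_gb
        self.shares = dict(shares)

    def reallocate(self, from_p, to_p, gb):
        if self.shares[from_p] < gb:
            raise ValueError("source partition has too little cache")
        self.shares[from_p] -= gb
        self.shares[to_p] += gb
        # The pool is fixed: what one partition gains, others give up.
        assert sum(self.shares.values()) == self.total_gb

cache = PartitionedCache(96, {"P1": 32, "P2": 32, "P3": 32})
# Grow Partition #2 at the expense of Partitions 1 and 3:
cache.reallocate("P1", "P2", 8)
cache.reallocate("P3", "P2", 8)
print(cache.shares)  # {'P1': 24, 'P2': 48, 'P3': 24}
```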
Hitachi Volume Migration software now becomes an extremely powerful tool for lifecycle management and optimizing applications. After the end of the fiscal year, for example, you might take some of the accounting data from the prior year, which is not going to be accessed quite as frequently, and move it off to an external storage system—say a Hitachi Thunder 9500™ V Series system with SATA drives. Similarly, if you had an application that perhaps was a key project and your CEO called the CIO to complain about its performance, you could use Volume Migration software to quickly and easily move that data from an external storage system onto the highest-performing storage on internal volumes.
It's about reducing CAPEX and OPEX. It's about aligning applications to the right storage tier while improving operational efficiencies. Storage tiers can be designed around many characteristics, including availability, performance, cost, and protection. Tiering around:
Availability: RAID types, controller architecture, etc.
Performance: 15K RPM fast storage (internal), typically assigned to the most demanding and important transactional applications; probably only one RAID type used here, and probably remote replication as well. 10K RPM 300GB internal drives for daily business/Web applications, possibly with multiple RAID types deployed (which could be considered "sub-tiers") and, depending on how business-critical they are, remote replication and/or ShadowImage software. 10K RPM Fibre Channel external storage for daily or less-than-24-hour applications that are not as demanding. Slower external storage (as low as SATA caliber) for saved snapshots, read-only historical data, data warehousing, etc.
Cost: use of FC drives versus SATA.
Protection: tiering around protection to include VTL and Active Archive solutions.
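The tiering logic above amounts to a simple rule: put each application on the cheapest tier that still meets its service requirement. A toy sketch; the four tiers mirror the list above, but the latency thresholds are invented for illustration:

```python
# Hypothetical tier-selection rule. Thresholds are made up, not product specs.
TIERS = [
    # (name, media, max_latency_ms) ordered fastest/most expensive first
    ("tier1", "15K FC internal", 5),
    ("tier2", "10K FC internal", 10),
    ("tier3", "10K FC external", 20),
    ("tier4", "SATA external", 50),
]

def choose_tier(required_latency_ms):
    """Pick the cheapest tier that still meets the latency requirement."""
    for name, media, max_ms in reversed(TIERS):  # cheapest first
        if max_ms <= required_latency_ms:
            return name
    return TIERS[0][0]  # nothing cheaper qualifies; use the top tier

print(choose_tier(25))  # -> tier3: external FC is good enough
print(choose_tier(4))   # -> tier1: only the fastest internal tier qualifies
```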
This is the Tiered Storage Maturity Model, which you should be well familiar with (details below). It maps to the products in the following way: Virtualization (UVM), Data Mobility (HTSM); Automation is a services-integration solution at this stage. Hitachi has developed a comprehensive maturity model to help customers realize the vision of tiered storage through a stepped approach.

Level 0: Heterogeneous Storage Environment. Most customers today have a heterogeneous storage environment. It is characterised by multiple storage arrays from different vendors, multiple management interfaces, underutilized storage capacity, VTL, archive, and NAS. This disparate storage strategy results in underutilization of storage assets with very high storage management costs. The final symptom of this level is that both CAPEX and OPEX are out of control.

Level 1: Virtualization. Virtualizing heterogeneous storage assets behind a USP or NSC simplifies the storage infrastructure, enabling improvements in storage utilization. It provides a common platform for storage management, business continuity, and other storage services like NAS, content management, and virtual tape. Virtualization also enables customers to align storage tiers with business needs. Because not all data has the same business value, treating it all equally is an expensive proposition. Virtualization lets customers create storage tiers with different provisioning and management processes and align the right data to the appropriate storage tier based on business value. This dramatically reduces capital expenditure and operational costs. HDS customer examples: Alberta Justice, Fidelity National, University of Utah.

Level 2: Data Mobility. Customers who have realized the benefits of virtualization (Level 1) can further improve IT efficiencies by incorporating data mobility tools in their virtualized storage environment.
Typically, data migrations are time consuming, require application downtime, and are prone to failure. With data mobility tools like Hitachi Tiered Storage Manager, technology refreshes can become seamless: data on assets reaching the end of their lease or life cycle can easily be migrated from one platform to another. Changing application and data life cycle needs also require ongoing alignment of storage tiers with business needs; a good example is a payroll application that requires more computing resources only during certain days of the month. Data mobility tools from Hitachi Data Systems make migration across storage tiers seamless. Seamless data migration during technology refresh and data life cycle management reduces risk, reduces operational cost, and maintains application uptime. At maturity Level 2, we recommend customers integrate VTL and archive as storage tiers behind the USP or NSC. HDS customer examples: HDFC Bank, HUK-Coburg.

Level 3: Policy-Based Automation. The next level of the Tiered Storage Maturity Model automates the alignment of storage tiers to business needs. Most end customers, or the businesses being served, demand SLAs at the application level, e.g., 10ms response time on an Oracle application 95% of the time. Also, an application's performance and availability demands can change over its life cycle. Policy-based automation dynamically moves data across storage tiers based on preset policies. For example, if the Oracle application requires a higher level of performance, the policy engine will automatically migrate the data to a higher tier to ensure the SLAs are met. Customers adopting this level of automation benefit from optimized performance, reduced infrastructure and management costs, and assured SLAs. HDS customer example: EDB.

Level 4: Content-Aware Automation. This is the highest level of automation, where, based on metadata, the application is automatically provisioned and tiered to meet SLAs.
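The Level 3 policy idea above can be sketched in a few lines: compare an application's measured response time against its SLA and recommend a tier promotion when the SLA is missed. Everything here (function name, tier numbering, thresholds) is hypothetical, intended only to make the mechanism concrete:

```python
# Toy policy engine for SLA-driven tier migration. Lower tier number = faster.
def evaluate_policy(app, measured_ms, sla_ms, current_tier, top_tier=1):
    """Return the tier the application should be on."""
    if measured_ms > sla_ms and current_tier > top_tier:
        return current_tier - 1   # SLA missed: promote one tier up
    return current_tier           # SLA met (or already on top tier): stay

# Oracle app with a 10 ms SLA, currently on tier 2:
print(evaluate_policy("oracle", 14, 10, current_tier=2))  # 1: migrate up
print(evaluate_policy("oracle", 8, 10, current_tier=2))   # 2: stay put
```

A real policy engine would of course average measurements over a window and honor the "95% of the time" clause before moving anything; the sketch shows only the decision step.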
This level is fully self-healing on an intelligent tiered storage platform, and it is the next step toward complete realization of our tiered storage vision. Common Management: HDS offers a single, common, integrated platform across all levels, with common management for structured and unstructured data. Common management includes monitoring, measurement, and security.
At a CAGR of 50%, data grows to roughly 337% of its original volume over three years. To address these challenges of high-growth data in a fixed-budget world, it becomes critical to determine information value and match that value to the right infrastructure cost structure. Basic tiered solutions often create islands of disparate, inefficient storage with limited service capability. Explain the model:
Reactive: server-internal or direct-attached storage (highly decentralized, expensive).
Tiered Storage Islands: limited consolidation, limited service levels, decentralized management, limited leverage (utilization, aligning a box to a tier).
Virtualized Storage Tiers: unify the disparate islands into one highly leveraged pool; truly consolidate, increasing utilization; apply service levels without barriers, giving true standard classes of service; uniform manageability allowing consistent processes; manage seamlessly across the enterprise; seamless and policy-based tier mobility; dynamic policy-based service adjustments.
What level of maturity does the client have now?
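The growth figure is simple compound-interest arithmetic, which can be checked directly:

```python
# Verifying the slide's claim: 50% compound annual growth over 3 years.
def compound_growth(cagr, years):
    """Growth factor after compounding `cagr` for `years` years."""
    return (1 + cagr) ** years

factor = compound_growth(0.50, 3)
print(factor)  # 3.375, i.e. data grows to ~337% of its original volume
```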
Hitachi Data Systems’ remote replication software provides a similar common tool for disaster recovery—from any storage tier to any storage tier. Both Universal Replicator software, our new asynchronous replication software, and Hitachi TrueCopy™ Remote Replication software, our time-tested synchronous and asynchronous replication, provide reliable solutions for replication. [CLICK – show internal-to-internal replication] Replicate between volumes on two USP V/VM systems — using TrueCopy software synchronously for immediate failover with guaranteed data integrity, or Universal Replicator software for remote replication over any distance with guaranteed data integrity. [CLICK – external-to-external replication] Both Universal Replicator software and TrueCopy software will allow replication to/from any internal storage volume or to/from any external storage system, providing full heterogeneous remote replication between virtually any two storage systems.
CYQ407 – same results (42% market share for Hitachi) ====== This is a rather interesting chart because it details what has transpired in the high-end storage market over the last seven years. In Q1 of calendar 2000, EMC, with its Symmetrix product, had approximately 75% market share, essentially owning the entire high-end storage market. Hitachi, Ltd. had approximately 16%, and IBM had 10%. Now let's take a look at what happened. In June 2000, by virtue of its unique research and development capabilities, Hitachi, Ltd. introduced its massively parallel crossbar switch architecture in the form of the Lightning 9900 series. Look at what happens to Hitachi, Ltd.'s market share from June 2000: in Q2 2000, Hitachi moved up, and it has essentially been gaining share ever since. The company introduced its Lightning 9900 V Series in May 2002, and market share continued to climb. In September 2004 it introduced the Universal Storage Platform, and its market share climbed to record levels at that time. The key point here is that, as a result of its unique and industry-leading R&D capabilities, Hitachi, Ltd. was able to introduce a storage system that effectively enabled customers to do more with less and broke the proprietary business model of one of our key competitors, who in the early 2000s was asking customers to put a maximum of one terabyte per high-end Symmetrix subsystem, each with its own SRDF software license. Hitachi, Ltd. broke the bottleneck of that shared-bus architecture: you can scale to 25-plus terabytes per individual subsystem and license your software by the number of terabytes under management. So you are looking at a 25-plus-to-1 ratio in scalability and performance characteristics. Our virtualization and business continuity solutions enabled us to, as the slide indicates, slice EMC's market share in half. Hitachi doubled its share, and even IBM managed to grow.
In summary, Hitachi, according to the latest financial analyst rankings, is essentially tied with EMC for high-end market share. There are fluctuations each quarter; what might be a strong quarter for EMC might be the start of our fiscal year (April for Hitachi). But the point is not so much who is precisely X% higher than the other at this juncture. The point is that EMC went from 75% of the market to the mid-30s, while Hitachi has gone from 15% to the 40s. IBM has made progress as well. This shows how innovation can have an impact on a very large, influential, and profitable market space.
So, now transition to talking about the Suite. The Storage Command Suite provides capabilities across the entire HDS storage line. With 6.0, that also includes the new SMS100 (though only with Device Manager). Most competitors (e.g., EMC) provide different tools on different platforms.
Let’s review the key benefits of NAS. First of all, NAS is optimized for file sharing, so customers can use one NAS system to displace multiple file servers. This eliminates file server proliferation and reduces capital expenditure. NAS also offers high performance and support across multiple file sharing protocols, be it Windows, UNIX, or Linux. By consolidating multiple file servers into one NAS system, customers can reduce management cost and improve operational efficiency: they have fewer servers to manage and fewer software licenses to buy, and they will complete their file sharing and backup tasks faster. NAS is easy. It is easy to install and manage for file applications, it leverages the existing IP network, and the ease of management lowers OPEX. NAS is also a convenient way to back up data to meet compliance requirements, which is especially critical for remote and branch offices.
On March 4, 2008, we announced GA of the Hitachi Essential NAS Platform which replaces the NAS Blade for USP V Family and AMS/WMS NAS Option. We also announced the next generation of High-performance NAS – the 3000 Series. GA is in calendar Q2 2008. 3100 and 3200 will replace the 2100 and 2200 respectively. The 2000 models and the 2000 Nearline models will remain unchanged in the portfolio.
Presenter: Use this slide with the following four to briefly introduce the product. This slide should allow you to introduce the strategic nature of this product, explain how it fits into a larger family of products without creating confusion, and describe the basic functionality it provides. More detail on what the product does, and how it does it, is provided on the subsequent slides. The Hitachi Essential NAS Platform is an easy-to-use NAS solution, ideal for medium-sized businesses, remote or branch offices, and enterprises needing file serving, backup, or file server consolidation. The Hitachi Essential NAS Platform replaces the NAS Blade for the Universal Storage Platform family and the Adaptable Modular Storage/Workgroup Modular Storage NAS Option. It complements the Hitachi High-performance NAS Platform, powered by BlueArc®. This NAS solution consolidates and manages up to 512 terabytes (TB) of data in a two-node cluster, with access to data over the Common Internet File System (CIFS) and Network File System (NFS) protocols. The Essential NAS Platform delivers best-in-class availability and scalability at a low price, and provides complete, cost-effective data protection with superior Hitachi hardware-based RAID technology and data protection software such as TrueCopy, Hitachi Universal Replicator (HUR), and SyncImage.
The Hitachi Essential NAS Platform family is comprised of three models. A field upgrade is available, allowing an easy upgrade path from the entry models to higher-end models:
Upgrade path from 1100c to 1300c to 1500c: offline upgrade of memory, plus a license upgrade to the desired model.
An optional second power supply can be installed in the field.
Depending on the model, an optional dual-ported 1/2/4Gbps autosensing HBA can be installed; a second HBA is required for NDMP-over-SAN backup.
Depending on the model, additional network card options can be added to each system, either 8 x copper and/or 8 x optical, offering up to 16 additional ports.
The key features of the Essential NAS Platform include:
An easy-to-use management interface, designed for NAS management based on customer feedback. It has the same design as HiCommand and is fully integrated with Device Manager, Tiered Storage Manager, and Tuning Manager. We offer two options: one for advanced users, the other for inexperienced users.
Best-in-class scalability and availability.
Advanced data protection capabilities.
In November 2007, Hitachi introduced the Hitachi High-performance NAS Platform 2000. This platform is designed for medium-sized businesses, just like the Essential NAS Platform. When should you position each of these two products? The key difference between the two NAS platforms is that the Essential NAS Platform does NOT support some of the High-performance NAS Platform's advanced enterprise-class features.
Hitachi High-performance NAS Platform offers the highest performance, highest scalability and most advanced virtualization framework today. These compelling capabilities make it the ideal solution for Consolidation and High-performance applications.
Data protection has been an IT challenge for decades. Some analysts estimate that backup accounts for over 50% of IT’s time. Data protection technologies are increasingly perceived as being slow, costly, labor-intensive, and unreliable. As a result, many enterprises are enhancing their tape backup strategies with new disk-based options. By tapping the capabilities of disk, such as concurrent read/write and random access, enterprises can complement their tape backup strategies to achieve faster and more reliable backup and recovery while continuing to use tape for what it does best such as off-site storage and long-term archiving.
Here you see a media server with a Fibre Channel connection, through an optional switch, to the Linux server I mentioned previously, and on to an FC-connected disk array. PAUSE When we developed VTFO we designed into it the ability to eliminate bottlenecks. For example, CLICK if the bottleneck is in getting data to our server, you can have multiple front-end connections attached to multiple media servers; CLICK if the bottleneck is getting data to disk, we provide the ability for multiple connections to the disk arrays. Before I continue on the scalability that VTFO provides, let me take a minute to share a customer case study. PAUSE We have a client who implemented a VTFO system because of tremendous issues they were experiencing with backup and recovery. Before implementing VTFO, they had occasion to execute a standard tape recovery of a server that contained one million files. As all of the tapes were on site, it took them 17 hours to do a complete recovery. After implementing VTFO, they found a dramatic improvement in their backup and recovery. As an example, they had occasion to recover another server, this time with 1.6 million files, 60% more files than before, and they were able to do the recovery in 1 hour and 38 minutes. This led them to keep more data accessible on disk, so they wanted to add more disk CLICK to the 29TB they already had. PAUSE Contrast this with competitors' pre-built appliances, where that type of scalability isn't possible. What would they do? They'd have to add another appliance, which, since it's bundled, includes another software license, another processor, another server, and more disk, which means you wind up paying for more than you need. And you need to manage a separate appliance. PAUSE We don't dictate what you must invest in when your environment changes and you need to expand. For example, should you want to address a larger library, you can cluster up to four servers. CLICK Now, as you know, servers have a lot of different capabilities.
Our competitors will recommend a server for you that will potentially have fixed capabilities; for example, a Dell 1750 with one internal 32-bit I/O bus would perform significantly differently than a server with three 64-bit I/O buses. (CLICK to show larger servers) Again, if the server is the bottleneck, we allow you to select servers of your choice to eliminate it. Now I'd like to take a few minutes to speak with you about how VTF Open functions in de-staging data from virtual tape to real tape. Next Slide
When we first began this presentation, we agreed that there are a number of challenges facing data centers today. Adding a VTL into a backup environment is an easy way to relieve some, if not all, of those issues. PAUSE We also agreed that there is an ever-increasing challenge today: finding a way to reduce the amount of data that you have to manage and protect. Only those vendors that can effectively give you the capability to reduce the amount of data will be able to provide an economical solution. PAUSE In order to dramatically improve the management and protection of data, a "game changing" technology is required.
Diligent has changed the data protection game. PAUSE/CLICK With breakthrough technology that will reduce required disk backup capacity on average by a factor of 25 times or more… PAUSE/CLICK …thereby enabling you to protect more by storing less… PAUSE/CLICK …at an acquisition cost below that of tape. A lot of vendors talk about total cost of ownership (TCO) and the fact that if you invest in a disk-based VTF today, over time you will realize a return greater than the available tape alternatives. However, many companies are constrained by their current budgets in making that investment. PAUSE But Diligent has changed all that by making the initial investment, or acquisition cost, of a ProtecTIER™ system less than the comparable tape-based alternative available today.
ProtecTIER™ software runs on a Linux-based server. PAUSE/CLICK ProtecTIER™ looks at storage systems as one large storage repository. This is unlike backup-application D2D, where each system is attached to a media server and only the media server that created the backup on that system has access to it. PAUSE/CLICK A critical component of ProtecTIER™ is a patent-pending factoring algorithm called HyperFactor. PAUSE HyperFactor has a memory-resident index, like a table of contents, that can map the contents of a 1PB repository in 4GB of memory. That 250,000:1 ratio between the repository and the index is a significant differentiator for Diligent and offers orders of magnitude greater granularity than anything else in the marketplace. The HyperFactor index looks at a backup stream and finds data that already exists in the repository without doing any I/O. This works even when the repository is up to a petabyte in size. PAUSE/CLICK To show how this works, we've depicted different data patterns in the repository with these multi-colored icons. PAUSE/CLICK Here you see a new backup data stream coming in from a backup application. This stream contains some data that already exists (represented by the multi-colored icons) and some data that is new (represented by the tan icons). PAUSE/CLICK Now the backup data stream passes through the HyperFactor "filter", which looks at all of the data patterns in the stream and uses the index to filter out the similar items, storing only the delta while pointing to the existing data it needs. As a by-product, one PB of disk can represent, on average, 25PB of tape data. PAUSE I keep using the word "similar" because it is not "identical": part of the algorithm's power is that it uses similarity instead of exact matches to achieve unmatched performance.
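The quoted 250,000:1 figure is easy to sanity-check from the sizes given (1PB repository, 4GB index); this arithmetic is our own check, not Diligent documentation:

```python
# Checking the repository-to-index ratio stated on the slide.
PB = 2 ** 50   # bytes in a petabyte (binary)
GB = 2 ** 30   # bytes in a gigabyte (binary)

repository = 1 * PB
index = 4 * GB
ratio = repository // index
print(f"{ratio:,}:1")  # 262,144:1 -- the same order as the quoted 250,000:1
```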
The most similar pattern in the repository is found with NO I/O; then that data is brought to the server for a computational compare, and only the delta is stored. This is performed without impacting the search time, regardless of the repository size. PAUSE Because there is no I/O (we are actually performing a memory search on an index), the search time will not noticeably differ whether the repository is 10TB or a petabyte. The location and similarity of the data isn't affected by naming conventions or by shifts or offsets in position, because we are looking at the byte level of the data. PAUSE A couple of key points to remember. What happens if the index disappears? Remember, the index is used to locate similarity in the repository; in fact, it is not used in the restore process at all. If the backup application's data stream needs to be restored, the data in the repository is self-describing, which means a restore can be done without the index, since the data itself describes what is required to restore the stream. As we said, the index is important for finding similarity; not only is it in the server memory, but it is also duplicated in two places on RAID-protected disk and kept synchronized. Let's look at the HyperFactor algorithm in a little more detail.
CLICK Because of the tremendous reduction in required disk capacity, a much smaller pipe is needed to transfer the data to a remote site. Now you have accomplished your disaster recovery in addition to your backup: if you lose the primary site, the data can be fully accessed at the secondary site. CLICK 2 At the secondary site you may destage to tape. Note that the Backup Server at the secondary site is part of the same Domain as the master server at the primary site. This allows ProtecTIER™ to make the cartridges available to it through a different virtual library. Once at the remote site, the images on the virtual tapes can be vaulted to physical tapes. DTC will have a section in the best practices guide describing how to implement this.
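The "much smaller pipe" claim follows directly from the factoring ratio. A rough sizing sketch: the 25:1 average ratio is the figure quoted earlier in these notes, while the nightly backup volume and replication window below are made-up example inputs, not customer data:

```python
# Rough replication-bandwidth sizing under deduplicated transfer.
# The 25:1 average factoring ratio is from the presentation; the nightly
# backup volume and replication window are illustrative assumptions.

nightly_backup_tb = 10          # assumed nightly backup volume, TB
factoring_ratio = 25            # average physical reduction quoted earlier
window_hours = 8                # assumed replication window

physical_tb = nightly_backup_tb / factoring_ratio
mbps = physical_tb * 8 * 1_000_000 / (window_hours * 3600)
print(f"deduplicated transfer: {physical_tb:.2f} TB -> ~{mbps:.0f} Mb/s sustained")
```

With these inputs, 10TB of logical backup shrinks to 0.4TB on the wire, which fits comfortably in a sub-Gigabit link over an eight-hour window.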
Slide 5: Retention times are getting longer Regulatory compliance has become a major burden for almost every organization. There are over 10,000 compliance laws to which an enterprise may be subject. This slide shows that certain vertical market segments face ever-longer periods during which they MUST retain data….and make it available on demand.
Slide 6: A typical enterprise archive environment Digital archiving is not a new phenomenon. Various departments and applications have been backing up their data since they first implemented computers. The problem is that this creates silos of information: an arrangement that does not scale well, and one in which searching across silos is almost impossible….or at least very, very expensive.
Slide 14: HCAP: How it works HCAP receives information from data-creating applications such as e-mail, document management, home-grown applications, and so on. When that information is ingested into the archive, we first authenticate it and assign a unique fingerprint. HCAP lets a customer select from a variety of authentication algorithms…such as MD5, SHA1, SHA256, and more. The information and its metadata are then reliably stored in the archive. The customer can choose the level of data protection…and HCAP will automatically maintain that selection. HCAP uses highly distributed techniques to ingest and store data so that the archive performs to the customer's needs. In addition, data can be indexed on separate, parallel processors so that ingestion and storage performance are not impacted. Once the information is stored and indexed, it can be easily and readily searched. We will explain the search features later. HCAP has been tested operating with 80 processor nodes on over 2.5PB of storage…with over 2 billion user objects. No one has come close to these numbers, and we have not come close to our top end of scalability; our limits have been based only on the amount of storage and processing equipment used in our labs. Third-party validation has also been secured.
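The authentication step described above amounts to computing a cryptographic digest at ingest and recomputing it on read. A minimal sketch using Python's standard hashlib; the algorithm names match those listed on the slide, but the function names and flow are illustrative, not HCAP's actual ingest pipeline:

```python
import hashlib

# Minimal sketch of content fingerprinting at ingest, as described above.
# HCAP's real pipeline is not public; this shows only the digest step.

ALGORITHMS = {"md5", "sha1", "sha256"}  # choices named on the slide

def fingerprint(data: bytes, algorithm: str = "sha256") -> str:
    if algorithm not in ALGORITHMS:
        raise ValueError(f"unsupported algorithm: {algorithm}")
    return hashlib.new(algorithm, data).hexdigest()

def verify(data: bytes, stored_digest: str, algorithm: str = "sha256") -> bool:
    # On read, recompute and compare to prove the object is unaltered.
    return fingerprint(data, algorithm) == stored_digest

obj = b"archived e-mail message"
digest = fingerprint(obj)
print(verify(obj, digest))                 # True
print(verify(obj + b"tampered", digest))   # False
```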
Non-disruptive service HCAP has been designed to never lose data. In addition, high-availability features are built in to ensure the user has continuous access. Policies enforce data preservation and retention, and the clustering software handles failures without impact (called self-healing) and recovers without effort (called self-configuration). For continuous scaling, the cluster also provides automatic load balancing. The software watches low-water-mark thresholds and then starts distributing data and work to other processors and storage. As the customer adds more processing and storage, the clustering software automatically continues to take advantage of the additional resources. Because the cluster is self-healing, service can be provided at a "relaxed" pace: if a disk or processor fails, the system adjusts; when the failed resources are replaced, the system reconfigures and rebalances. Remote serviceability tools* enable both Hitachi and the user to investigate problems and schedule routine maintenance activities. Customers should think of HCAP as a "set and forget" type of solution. * Requires remote connectivity
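The threshold-driven load balancing described above can be sketched roughly as follows. The watermark value and node model are hypothetical illustrations of the behavior, not HCAP's actual clustering internals:

```python
# Rough sketch of watermark-driven rebalancing across cluster nodes, as
# described above. The 0.80 threshold and the node/utilization model are
# hypothetical, not HCAP's actual clustering software.

WATERMARK = 0.80  # assumed utilization threshold that triggers redistribution

def rebalance(nodes: dict) -> dict:
    """Shift load from nodes above the watermark onto the least-loaded node."""
    nodes = dict(nodes)
    for name, util in sorted(nodes.items(), key=lambda kv: -kv[1]):
        if util > WATERMARK:
            target = min(nodes, key=nodes.get)   # least-loaded node right now
            excess = util - WATERMARK
            nodes[name] -= excess
            nodes[target] += excess
    return nodes

cluster = {"node1": 0.95, "node2": 0.40, "node3": 0.55}
print(rebalance(cluster))  # node1 drops to the watermark; node2 absorbs the excess
```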
HCAP fully-integrated appliance includes:
- Hitachi Content Archiver V2.0 software
- 1U server nodes (8GB memory); start with two, scaling up in pairs
- Two Ethernet switches
- Two FC switches (16-port, expandable)
- WMS array (controllers + disk; RAID 6)
- 42U rack
- Redundant connectivity, pre-cabled
Taking our customers' needs into account, Hitachi introduced its Application Optimized Storage strategy in May 2004. Our objective is to align IT resources with our customers' business objectives so they obtain the maximum benefit. This alignment of IT and business objectives involves much more than simply managing data through its lifecycle. It requires understanding the needs of the business in order to develop, manage, and implement a storage infrastructure that optimizes the availability of information in support of business applications at all times. To address this complex problem, HDS Application Optimized Storage solutions are based on an integrated framework of hardware, software, and services that includes application, content, data, and storage services, as this graphic represents. Our vision changes the way customers execute their storage strategies, with tangible benefits. Detailed explanation of the graphic: Each component in the framework plays a critical role in an overall solution, so let's talk about each layer in more detail: Application Services Application services provide the application-centric infrastructure management that is critical to enterprises today and are comprised of the application modules of the HiCommand® Storage Area Management Suite. Application services correlate the availability of business-critical applications with storage network capacity and performance, provide logical-to-physical application path management, and enable application optimization by aligning storage resources with business needs.
Application Services are delivered through a product set that includes the following application management modules: - HiCommand QoS Modules - HiCommand QoS for Oracle® - HiCommand QoS for Sybase® - HiCommand QoS for Microsoft® Exchange - HiCommand QoS for File Servers - HiCommand Chargeback Module - HiCommand Tuning Manager These Application Services tools are all integrated and allow robust management of the enterprise’s storage infrastructure from application to disk. Content Services Companies run a wide variety of applications in support of their business processes and understanding the lifecycle requirements of the data generated by these applications is a critical component of Application Optimized Storage. Therefore, content services represent any applications that provide the ability to index, store, search, and retrieve information. These applications, including databases, messaging, file systems, ERP, and CRM, are all considered content services and provide critical information about the lifecycle requirements of application data. Application Optimized Storage solutions use this application awareness to appropriately optimize storage infrastructure to meet application requirements. Unlike other storage vendors who have chosen to deliver their own proprietary application solutions, Hitachi Data Systems is committed to an open, collaborative approach. We partner with leading application vendors including IBM, Microsoft, OpenText (IXOS), Oracle, and Sybase, providing customers with the flexibility to choose the applications they need to support their business. Two examples of Content Services offered by Hitachi Data Systems include: Message Archive for E-mail Message Archive for E-mail, powered by IXOS software, provides users with a limitless mailbox by seamlessly offloading messages and attachments to archival storage. 
This lowers e-mail server loads and greatly reduces the number of servers and software licenses required to support a given e-mail user population, thereby improving their efficiency and performance and lowering total cost of ownership. Message Archive for E-mail improves productivity as it reduces the time users and IT administrators alike spend managing e-mail, minimizes costs, and expedites retrieval of e-mails required for legal discovery or auditing purposes. Message Archive for Compliance The Message Archive for Compliance solution helps customers optimize their e-mail systems while providing message indexing, search and retrieval capabilities, audit trails, and policy management to preserve messages for mandatory retention periods. Message Archive for Compliance combines Hitachi storage with Hitachi Data Retention Utility software for WORM protection, IXOS archive software including Compliance Package, and Hitachi Data Systems implementation services. It enables companies to retain an unalterable archive of e-mail and instant messages for the fixed period of time mandated by SEC Rule 17a-4, Sarbanes-Oxley, Basel II, and other regulatory requirements. With these archiving solutions as starting points, Hitachi Data Systems will roll out additional Content Services for applications in areas such as rich media and health care. Data Services A common set of data management tools is a key component of Application Optimized Storage. Based upon an understanding of application storage requirements, storage cost, performance, functionality, and availability can be optimized using comprehensive data management tools for backup, migration, replication, and security. 
Data Services products from Hitachi Data Systems include: - Hitachi HiCopy Cross-System Copy software - Hitachi CopyCentral z/OS® Business Continuity Manager software - Hitachi QuickShadow™ Copy-on-Write Snapshot software - Hitachi ShadowImage™ In-System Replication software - Hitachi TrueCopy™ Remote Replication software - Hitachi Data Retention Utility software Hitachi Data Systems is recognized as a leading provider of copy and data protection products as well as associated business continuity and data migration design and implementation services in both open systems and mainframe environments. These products and skills are essential for tiered storage deployments which match data value to appropriate classes of storage systems. Storage Services Storage services provide the foundation for all Application Optimized Storage solutions by providing a heterogeneous, multi-tier storage infrastructure supported by common storage management tools. This architecture allows the exact matching of application priority policies and storage infrastructure across an unmatched range of high-end enterprise and midrange storage products that provide a broad selection of performance, availability, functionality, and price/performance attributes. The components of storage services are heterogeneous, multi-tier infrastructure, connectivity, and common management: Infrastructure Hitachi Lightning 9900™ V Series enterprise storage systems Hitachi Lightning 9900™ V Series enterprise storage systems provide seamless scalability with nondisruptive expansion to over 140TB to simplify your storage infrastructure through massive consolidation. When combined with Hitachi storage software and the HiCommand® Storage Area Management Suite, these systems support Application Optimized Storage™ solutions, enable “set and forget” management, protect data assets, and optimize resources. Lightning 9900 V Series systems are powered by the Hi-Star™ crossbar switch architecture. 
This assures you of no single point of failure and instant, 24/7 data access. We even back it up with a 100% data availability guarantee. The Lightning 9900 V Series systems support not only open systems, but also mainframe environments through FICON and ESCON as well as copy software compatibility. Recently, the Enterprise Storage Group reported that the Lightning 9980V storage system is unsurpassed for the kind of high-end, multidimensional scalability required for serious storage consolidation. Hitachi Thunder 9500™ V Series modular storage systems The Hitachi Thunder 9500 V Series modular storage systems provide industry-leading (up to 64TB) capacity, performance, and connectivity in a small footprint. These systems can grow with your business, addressing applications such as data replication, message archiving, and regulatory compliance. For economical information lifecycle management, match the cost of storage to the value of your data by tiering storage down from Lightning 9900™ V Series enterprise systems to lower-cost Thunder 9500™ V Series models. SATA Intermix Option New global regulatory requirements are driving demand for automated storage solutions that simplify the management and migration of data throughout the entire data lifecycle. The Serial ATA (SATA) Intermix Option for the Thunder 9500 V Series of modular storage systems can be added to existing Thunder 9585V™, Thunder 9580V™ and Thunder 9570V™ high-end modular storage systems, enabling customers to create the world's first "DLM in a box": high-speed Fibre Channel and lower-cost native SATA tiered within one storage system. Connectivity Storage Area Networks SANs are an essential part of Hitachi Data Systems' delivery of Application Optimized Storage solutions. SANs make large storage pools shareable across the enterprise, centralize storage management, and dramatically improve storage utilization, resulting in lower costs.
Yet they can simultaneously provide better performance for the applications that drive business. Our SAN solutions encompass storage systems, switches, servers, management software, multi-protocol support, services, and other storage network components developed by Hitachi, our alliance partners, and third-party providers. Working with the storage networking industry leaders, such as Brocade, Cisco, CNT, and McDATA, Hitachi Data Systems provides extensive connectivity options, including IP (iSCSI, FCIP, iFCP) and Fibre Channel configurations. In addition, the Lightning family of storage systems supports both ESCON and FICON protocols for mainframe connectivity concurrently with open systems protocols. This makes the Lightning family the platform of choice for massive consolidation projects. Virtual Storage Ports/Host Storage Domains Virtual storage ports, available in both the Lightning 9900 V Series and Thunder 9500 V Series storage systems, enable each Fibre Channel physical port to support 128 heterogeneous open systems servers. Each server has its own secure storage partition and bootable LUN 0 through Host Storage Domains. This capability simplifies the storage network infrastructure, eases management, and enables large-scale consolidation, resulting in lower TCO. Network Attached Storage For many applications, especially Web, design, and medical, the concern is not bandwidth but file access response time. Hitachi solutions for NAS, the HDS-NetApp® Enterprise NAS Gateways and the Lightning NAS Blade, help deliver cost-efficient storage utilization across the enterprise. Common Storage Management Common Storage Management is achieved through standards-based rich management tools that provide IT executives with a Single Point of Control for both application and infrastructure requirements.
To fully benefit from Application Optimized Storage, all of these elements, including business continuity characteristics, array performance, and network fabric, need to be defined, managed, and mapped to what the business requires from its applications in order to optimize delivery of value to the business. Common Storage Management is perhaps the most important component of Application Optimized Storage. Rather than provide end-users with disparate interfaces for disparate platforms, essentially resulting in multiple islands of storage and inaccessible information, Hitachi Data Systems provides customers with the same software, the same management interfaces, and the same tool sets to manage all heterogeneous storage systems from a single console. The final key components of Application Optimized Storage are Services and Best Practices. To ensure organizations maximize their investment in Application Optimized Storage solutions, Hitachi Data Systems offers a comprehensive suite of technology, storage, education, and professional services. Global Solution Services consultants can help you plan, design, implement, integrate, manage, and optimize storage infrastructure solutions that meet your needs. Areas in which our consultants assist customers include: - Industry Solutions—Enterprise content archival solutions that incorporate hardware, software, and professional services to address your business and regulatory compliance requirements. - Application Optimized Solutions—Bridge the gap between business applications and IT's ability to precisely deliver service levels with GSS strategic consulting, design integration, and robust deployment capabilities. - Storage Services—Services that apply proven best practices along with appropriate tools and training to help you to plan, design, implement, integrate, manage, optimize, and maintain your storage infrastructure. 
- Product-Based Services—Implementation, simplification, and optimized ROI and TCO for Hitachi Data Systems and select third-party products. - Education Services—Help you to improve your staff efficacy and efficiency in implementing and supporting multi-vendor storage solutions.
Our calendar 2008 company outlook – The big message here is that Hitachi is leading the industry in storage virtualization! We are the leaders in storage virtualization, and there are several significant proof points to support this. Starting with… 1) Hitachi is really the only company with storage virtualization technology in its flagship products. If you think about it, Hitachi's USP and NSC, our flagship enterprise storage virtualization offerings, have virtualization technology embedded in them. Whereas if you look at our competitors such as EMC, if you want virtualization, it's not embedded in their flagship offering, DMX. You'd have to buy the DMX and you'd have to buy Invista, their virtualization offering, which is a peripheral switch hybrid-type device. If you want to buy virtualization from IBM, it's not available in their flagship DS 8400 product. It's available in the form of an appliance that sits in the network, a product called the SVC (SAN Volume Controller). So, again, Hitachi is so dedicated to storage virtualization that our flagship products have virtualization technology embedded in them. That is a true differentiator in the market. 2) Additionally, Hitachi pioneered a revolutionary storage architecture. With its Intelligent Virtual Controllers, we have separated the brain from the body of storage, or the innovation and intelligence from the commodity, the body being the disks. And that has enabled us to disrupt the market once again, just as we did when we introduced the Hi-Star architecture in 2000 (we'll touch on that as well). 3) This last bullet covers our overall outlook for the year. We believe we exhibit the highest levels of hardware and software sophistication. This is demonstrated by our platform direction and our portfolio of common storage services. 
Hitachi is truly the only company that can provide customers with a single replication engine and a single management interface across all storage assets -- regardless of cost, manufacturer, type, price band, etc. These are truly the most advanced common storage services available across all platforms in the market today.
Some interesting facts:
- 20% structured data (databases, transactional, data warehouses)
- 80% unstructured (objects and files) and semi-structured (e-mail) data
- <5% of unstructured data is managed through content management…and shrinking
- Unstructured data is growing at 10X the rate of structured data (files, e-mail, content)
- 2,272PB of unstructured data today, 20,000PB in 2010…most is dormant after 90 days (ESG)
Value of the file…content is king:
- File attributes enable basic classification
- Content attributes (metadata) enable richer classification and descriptions
- Content inside the file enables text searching…informational value