This document discusses techniques for implementing storage tiering to simplify management, lower costs, and increase performance. It describes using IBM's Easy Tier technology to automatically move data between tiers of flash, disk, and tape storage based on I/O density and age. The tiers include flash, solid state drives, enterprise HDDs, and nearline HDDs. Easy Tier measures activity every 5 minutes and moves hot data to faster tiers and cold data to slower tiers with little administration needed. Case studies show how storage tiering saved IBM Global Accounts $17 million in one year and $90 million over 5 years by optimizing data placement across tiers.
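The promotion and demotion logic summarized above can be sketched in a few lines. The tier names and 5-minute measurement window come from the summary; the thresholds, data structures, and one-tier-at-a-time policy are illustrative assumptions, not IBM's actual Easy Tier algorithm.

```python
# Illustrative sketch of I/O-density-based tiering (not IBM's Easy Tier code).
# Extents with high I/O density are promoted toward flash; cold extents sink
# toward nearline HDD.

TIERS = ["flash", "ssd", "enterprise_hdd", "nearline_hdd"]  # fastest to slowest

def plan_moves(extents, hot_iops=100, cold_iops=5):
    """extents: dicts with 'id', 'tier', and 'iops' measured over a 5-minute window."""
    moves = []
    for ext in extents:
        idx = TIERS.index(ext["tier"])
        if ext["iops"] >= hot_iops and idx > 0:
            moves.append((ext["id"], ext["tier"], TIERS[idx - 1]))  # promote one tier
        elif ext["iops"] <= cold_iops and idx < len(TIERS) - 1:
            moves.append((ext["id"], ext["tier"], TIERS[idx + 1]))  # demote one tier
    return moves

extents = [
    {"id": "e1", "tier": "enterprise_hdd", "iops": 250},  # hot: promote to ssd
    {"id": "e2", "tier": "ssd", "iops": 1},               # cold: demote to enterprise_hdd
    {"id": "e3", "tier": "flash", "iops": 500},           # hot, but already on fastest tier
]
print(plan_moves(extents))
```

Little administration is needed precisely because such a loop runs on its own schedule rather than on operator requests.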
An overview of Converged and Hyperconverged Systems, including VersaStack and IBM Hyperconverged Systems. Presented at the IBM Technical University in Orlando, FL.
This document provides an overview of a technical university session on business continuity and disaster recovery. It discusses the seven tiers of BCDR planning, from basic backup and restore to fully automated failover. The sessions will cover topics such as IBM storage options, information lifecycle management, and reducing data footprint. Daily sessions on recovery strategies, hybrid cloud, and managing risk are listed. The document outlines the components of a BCDR plan, including high availability, continuous operations, and disaster recovery.
This document discusses using IBM Spectrum Control to optimize storage utilization. It describes three scenarios: [1] Determining the appropriate storage tier and type needed when more capacity is required, [2] Evenly redistributing workloads after new storage is added, and [3] Identifying where to move volumes when retiring aging storage devices. It provides steps for analyzing volume and pool usage data to identify optimization opportunities and recommendations for improving storage performance and costs.
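Scenario [2], evenly redistributing workloads after new storage is added, can be illustrated with a simple greedy heuristic. The pool names, volume sizes, and balancing rule below are hypothetical and far simpler than Spectrum Control's actual analysis.

```python
# Hypothetical rebalancing sketch: suggest volume moves until pool loads are
# within a tolerance of each other. Not Spectrum Control's algorithm.

def rebalance(pools, tolerance=0.10):
    """pools: {pool_name: [volume_sizes_gb]}. Returns suggested (size, from, to) moves."""
    moves = []
    loads = {p: sum(v) for p, v in pools.items()}
    target = sum(loads.values()) / len(loads)
    while True:
        hi = max(loads, key=loads.get)
        lo = min(loads, key=loads.get)
        diff = loads[hi] - loads[lo]
        if diff <= tolerance * target or not pools[hi]:
            break
        # pick the volume whose size is closest to half the imbalance
        vol = min(pools[hi], key=lambda v: abs(v - diff / 2))
        if abs(diff - 2 * vol) >= diff:  # move would not reduce the imbalance
            break
        pools[hi].remove(vol)
        pools[lo].append(vol)
        loads[hi] -= vol
        loads[lo] += vol
        moves.append((vol, hi, lo))
    return moves

pools = {"old_pool": [400, 300, 200, 100], "new_pool": []}
print(rebalance(pools))  # volume moves suggested from the loaded pool to the new one
```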
An overview of IBM Archive solutions, explaining why these are better than keeping monthly backup tapes for seven years to meet long-term retention requirements. Presented at the IBM Technical University in Orlando, FL.
This document summarizes Tony Pearson's presentation on business continuity and disaster recovery. It discusses the seven tiers of business continuity, from tape backups stored offsite to real-time continuous data replication. It emphasizes having a plan to recover critical business processes within defined time periods. The presentation also provides an overview of replication technologies and metrics like recovery point objective and recovery time objective to achieve various tiers of business continuity and disaster recovery.
An overview of IBM Advanced Analytics available in Virtual Storage Center and Spectrum Control Advanced Edition, including pool balancing, storage tiering, and data migration. Presented at the IBM Technical University in Orlando, FL.
IBM Spectrum Virtualize v8.2.0 now supports 25GbE TCP/IP offload engine (TOE) cards and deduplication with Data Reduction Pools. This session covers the latest features of versions 8.1 and 8.2.
Are you ready for NVMe? IBM FlashSystem uses NVMe inside and is NVMe-ready for use with FCP and Ethernet fabrics. This session explains FC-NVMe and NVMe-oF, and how IBM FlashSystem uses NVMe internally.
IBM invented Copy Data Management in 1998. This session explains the differences among IBM Spectrum Protect Snapshot, IBM Spectrum Protect Plus, and IBM Spectrum Copy Data Management. The copies are not just for data protection and disaster recovery; they can also be reused for DevOps, reporting, or analytics.
IBM Spectrum NAS is our latest Software Defined Storage for SMB and NFS protocol-based storage. This session shows how it is designed and architected, and how to deploy it in less than one day.
IBM Cloud Object Storage System, presented Oct 16, 2017 at IBM Systems Technical University in New Orleans, LA. This covers IBM's object storage offering, which came from the acquisition of Cleversafe and was formerly known as the dsNet product.
This session provides historical context of storage infrastructure over the past five decades, to help explain the rise of Converged and Hyperconverged Infrastructure.
- The document discusses IBM's cloud storage options, including IBM XIV, SAN Volume Controller, Elastic Storage Server, IBM Spectrum Archive, and IBM Spectrum Storage software-defined storage offerings. It also covers unified file and object storage with IBM Spectrum Scale and IBM Cloud Object Storage.
- The presentation covers topics such as business continuity, IBM's cloud storage options, IBM Cloud Object Storage, converged and hyperconverged environments, storage tiering, and IBM Spectrum Scale for file and object storage.
- IBM offers various cloud storage solutions including block, file, object, reference, hosted, ephemeral, and persistent storage options that can be deployed on-premises or off-premises.
This document discusses a presentation by Tony Pearson and Rivka Matosevich of IBM on managing risks with data footprint reduction. The presentation will cover:
1. Introduction to data footprint reduction technologies like compression, thin provisioning, and data deduplication.
2. How these technologies impact storage management and the risks associated with each technique.
3. Details on IBM FlashSystem storage solutions like the A9000 and A9000R and how they address data footprint reduction.
4. A demonstration of the Hyper-Scale Manager GUI for controlling and managing the risks of data footprint reduction technologies.
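As a rough illustration of the deduplication technique covered in the presentation, a content-hash sketch shows where the space savings come from. The block granularity and SHA-256 fingerprinting below are illustrative choices, not any IBM product's implementation.

```python
# Minimal sketch of block-level data deduplication by content hashing.
# Real products add reference counting, variable-size chunking, and
# collision handling; this only illustrates the space-saving idea.
import hashlib

def dedup_store(blocks):
    """Store unique blocks keyed by SHA-256; return the store and a recipe
    (ordered list of hashes) from which the original stream can be rebuilt."""
    store, recipe = {}, []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        recipe.append(digest)
    return store, recipe

blocks = [b"AAAA", b"BBBB", b"AAAA", b"AAAA"]      # 16 bytes logical
store, recipe = dedup_store(blocks)
physical = sum(len(b) for b in store.values())      # 8 bytes physical
print(len(blocks), "blocks written,", len(store), "stored")
print("reduction ratio:", sum(len(b) for b in blocks) / physical)
```

The risk side discussed in the presentation follows directly: one stored block now backs many logical blocks, so losing it affects every copy in the recipe.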
S de0882 new-generation-tiering-edge2015-v3, by Tony Pearson
IBM offers a variety of storage optimization technologies that balance performance and cost. This session covers Easy Tier, Storage Analytics, and Spectrum Scale.
The Pendulum Swings Back - Understanding Converged and Hyperconverged Integrated Systems, presented Oct 17, 2017 at IBM Systems Technical University, New Orleans LA
What keeps you up at night? Is managing your storage infrastructure giving you sleepless nights, or is it a dream come true? This session will introduce IBM Spectrum Control, IBM Spectrum Connect, IBM Copy Services Manager, IBM Storage Insights, and Insights Pro.
The document outlines an agenda for a technical university session covering concepts of file and object storage, IBM NAS solutions like Spectrum NAS, Spectrum Scale, and Cloud Object Storage. It then describes how to use the File and Object Storage Design Engine studio, a pre-sales sizing tool, to generate designs for these IBM solutions based on user requirements. The presenter will demonstrate the tool using IBM Spectrum NAS as an example.
The document discusses IBM's hybrid cloud storage solutions and how various IBM storage products integrate with OpenStack. It provides an overview of OpenStack and how IBM storage such as IBM Spectrum Virtualize, XIV, DS8000, A9000, Spectrum Scale and Spectrum Protect integrate with OpenStack. It also outlines Tony Pearson's speaking schedule for the week which includes topics on IBM Cloud Object Storage and IBM hybrid cloud storage solutions.
S sy0883 smarter-storage-strategy-edge2015-v4, by Tony Pearson
IBM Smarter Storage Strategy explains IBM's direction for its IBM System Storage product line. This includes support for Big Data analytics, optimizing for traditional workloads, and helping clients transition to Cloud.
This document provides an overview of IBM Cloud Object Storage. It discusses how object storage differs from block and file storage by allowing unlimited scalability. It describes IBM's acquisition of Cleversafe and how its erasure coding technology reduces storage costs by up to 70% compared to traditional RAID solutions. The document outlines the architecture and functionality of IBM Cloud Object Storage, including how data is ingested, stored across geographic locations in a highly available manner, and retrieved in the event of failures.
This document provides an overview of IBM's Cloud Object Storage system, which was acquired through their purchase of Cleversafe. It discusses how object storage differs from block and file storage in its use of objects rather than files or blocks. The system uses erasure coding to distribute data across multiple sites, providing redundancy to tolerate failures while reducing storage costs by up to 70%. The document outlines the architecture and benefits of IBM's Cloud Object Storage system.
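The cost claim above can be made concrete with a quick expansion-factor calculation. The width/threshold pair (12, 8) and the three-copy replication baseline are illustrative assumptions, not a specific IBM Cloud Object Storage configuration.

```python
# Worked example of why erasure coding cuts raw capacity versus replication.
# Numbers are illustrative; actual configurations and savings vary.

def raw_capacity(usable_tb, expansion):
    return usable_tb * expansion

usable = 100                                   # TB of user data
replication = raw_capacity(usable, 3.0)        # 3 full copies -> 300 TB raw

# Erasure coding: split each object into k data slices, add n-k parity slices;
# any k of the n slices can rebuild the object, even across site failures.
n, k = 12, 8
erasure = raw_capacity(usable, n / k)          # 12/8 expansion -> 150 TB raw

print(f"replication: {replication:.0f} TB, erasure coded: {erasure:.0f} TB")
print(f"raw capacity saved: {1 - erasure / replication:.0%}")
```

Wider configurations (larger n relative to n-k) push the savings higher, which is how figures such as the 70% cited above are reached.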
Data Footprint Reduction: Understanding IBM Storage Options, by Tony Pearson
This document provides an overview of a presentation given by Tony Pearson and Sanjay S Bhikot on data footprint reduction technologies available from IBM, including thin provisioning, space-efficient copying, data deduplication, and compression. The presentation covers the history and functionality of each technology, as well as how they are implemented in various IBM storage products to help reduce storage costs.
IBM was the first major storage vendor to deliver eMLC flash storage systems and has been incorporating flash into its servers and storage products for many years. This presentation explains the benefits of using IBM FlashSystem with I/O-intensive workloads where lower latency can make the difference; use cases include online transaction processing (OLTP), business intelligence (BI), online analytical processing (OLAP), virtual desktop infrastructure (VDI), high-performance computing (HPC), and content delivery solutions (such as cloud storage and video on demand).
The flash market started out monolithically. Flash was a single media type (high performance, high endurance SLC flash). Flash systems also had a single purpose of accelerating the response time of high-end databases. But now there are several flash options. Users can choose between high performance flash or highly dense, medium performance flash systems. At the same time, high capacity hard disk drives are making a case to be the archival storage medium of choice. How does an IT professional choose?
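One way to approach the "how does an IT professional choose?" question is a back-of-the-envelope model that sizes each medium by whichever resource, capacity or IOPS, it runs out of first. Every price and device spec below is an illustrative assumption, not a vendor figure.

```python
import math

# Illustrative tier-selection arithmetic: flash wins when IOPS dominate,
# dense media win when capacity dominates.

media = {
    # name: ($ per GB, IOPS per device, GB per device) -- assumed values
    "high_perf_flash": (1.00, 300_000, 4_000),
    "dense_flash":     (0.30,  80_000, 16_000),
    "nearline_hdd":    (0.02,     100, 12_000),
}

def cost_for(workload_gb, workload_iops, price_gb, iops_dev, gb_dev):
    """Device count is driven by whichever is scarcer: capacity or IOPS."""
    devices = math.ceil(max(workload_gb / gb_dev, workload_iops / iops_dev))
    return devices * gb_dev * price_gb

# An IOPS-heavy OLTP workload vs. a capacity-heavy archive workload:
for name, spec in media.items():
    oltp = cost_for(10_000, 500_000, *spec)
    archive = cost_for(1_000_000, 1_000, *spec)
    print(f"{name:16s} OLTP ${oltp:,.0f}  archive ${archive:,.0f}")
```

With these assumed numbers, high-performance flash is cheapest for the OLTP workload while nearline HDD is cheapest for the archive, which is exactly the split the paragraph above describes.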
This document provides an overview of Oracle's product direction for tiered storage solutions. It discusses trends like massive data growth that are forcing customers to rethink data management and adopt tiered storage strategies. Oracle's solutions are intended to optimize data protection and archival by matching the cost of storage to the use and value of information through the use of flash, disk, and tape technologies arranged in a tiered architecture. The document highlights benefits like the lowest total cost of ownership.
InterConnect 2016 yss1841-cloud-storage-options-v4, by Tony Pearson
This session will cover private and public cloud storage options, including flash, disk, and tape, to address the different types of cloud storage requirements. It will also explain the use of Active File Management for local space management and global access to files, and support for file sync-and-share.
In this webinar, join experts from Storage Switzerland and Tegile to discover whether the All-Flash Data Center can become reality. We will explore the return on investment that All-Flash systems can deliver, such as increased user and virtual machine densities, lower drive counts, and simpler storage architectures. We will also look at some of the methods that All-Flash systems employ to deliver an acceptable cost per GB, such as thin provisioning, clones, deduplication, and compression. Finally, we will take one last look at disk: does it have a role in the All-Flash Data Center, and if so, what should that role be?
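The effective cost-per-GB argument made above can be sketched with simple arithmetic. The reduction ratios below are illustrative assumptions, not any vendor's published figures.

```python
# Sketch of the "acceptable cost per GB" math: data-reduction features divide
# the raw media price by their combined reduction ratio. Ratios are assumed.

def effective_cost_per_gb(raw_cost_per_gb, dedup=2.0, compress=2.0, thin=1.25):
    """Divide the raw media price by the combined data-reduction ratio."""
    return raw_cost_per_gb / (dedup * compress * thin)

flash_raw = 1.00   # $/GB raw flash (illustrative)
hdd_raw = 0.05     # $/GB raw disk (illustrative); reduction rarely applied here
print(f"effective flash: ${effective_cost_per_gb(flash_raw):.2f}/GB")
print(f"raw disk:        ${hdd_raw:.2f}/GB")
```

A 5:1 combined reduction narrows a 20x raw price gap to 4x, which is why the webinar treats these features as central to the all-flash business case.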
Building a High Performance Analytics Platform, by Santanu Dey
The document discusses using flash memory to build a high performance data platform. It notes that flash memory is faster than disk storage and cheaper than RAM. The platform utilizes NVMe flash drives connected via PCIe for high speed performance. This allows it to provide in-memory database speeds at the cost and density of solid state drives. It can scale independently by adding compute nodes or storage nodes. The platform offers a unified database for both real-time and analytical workloads through common APIs.
This document discusses various options for deploying solid state drives (SSDs) in the data center to address storage performance issues. It describes all-flash arrays that use only SSDs, hybrid arrays that combine SSDs and hard disk drives, and server-side flash caching. Key points covered include the performance benefits of SSDs over HDDs, different types of SSDs, form factors, deployment architectures like all-flash arrays from vendors, hybrid arrays, server-side caching software, virtual storage appliances, and hyperconverged infrastructure systems. Choosing the best solution depends on factors like performance needs, capacity, data services required, and budget.
Application acceleration from the data storage perspective, by Interop
The document discusses new advances in caching and solid state storage for accelerating application performance. It describes how solid state drives (SSDs) offer significantly higher input/output performance than spinning hard disks. SSDs can be used to cache frequently accessed data and improve performance for databases, file systems, virtualized applications, and other workloads limited by random disk access. The document provides examples of inserting SSDs at different points in storage systems, such as directly on application servers or in storage area networks, to optimize performance.
AWS Webcast - Cost and Performance Optimization in Amazon RDS, by Amazon Web Services
Amazon RDS makes it easy to set up, operate, and scale relational databases in the cloud. The service offers a variety of options for optimizing the performance level delivered, as well as optimizing your spending. In this webinar, we will show a variety of techniques for implementing the right performance level for your application.
Learning Objectives:
• Understand the Amazon RDS options that change database performance and cost
• Select the appropriate performance and cost level for your specific application
Who Should Attend:
• Technical Amazon RDS customers and prospective customers
Selecting the Right AWS Database Solution - AWS 2017 Online Tech Talks, by Amazon Web Services
• Get an overview of managed database services available on AWS
• Learn how to combine them for high-performance cost effective architectures
• Learn how to choose between the AWS database services based on your use case
On AWS you can choose from a variety of managed database services that save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We'll explain the fundamentals of Amazon RDS, a managed relational database service in the cloud; Amazon DynamoDB, a fully managed NoSQL database service; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be economical. We will cover how each service might help support your application and how to get started.
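The service-selection guidance above can be condensed into a hypothetical lookup helper. The use-case labels and mapping are a deliberate simplification for illustration, not AWS's official selection criteria.

```python
# Hypothetical decision helper condensing the session's guidance on choosing
# among RDS, DynamoDB, ElastiCache, and Redshift. Illustrative only.

def suggest_service(use_case):
    table = {
        "relational oltp":         "Amazon RDS",
        "key-value at scale":      "Amazon DynamoDB",
        "in-memory caching":       "Amazon ElastiCache",
        "petabyte data warehouse": "Amazon Redshift",
    }
    return table.get(use_case, "mixed workload: consider combining services")

print(suggest_service("key-value at scale"))
print(suggest_service("petabyte data warehouse"))
```

As the session notes, real applications often combine several of these, for example RDS for transactions fronted by ElastiCache and fed into Redshift for analytics.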
Getting Started with Managed Database Services on AWS - September 2016 Webina..., by Amazon Web Services
On AWS you can choose from a variety of managed database services that save effort, save time, and unlock new capabilities and economies. In this session, we make it easy to understand how they differ, what they have in common, and how to choose one or more. We'll explain the fundamentals of Amazon RDS, a managed relational database service in the cloud; Amazon DynamoDB, a fully managed NoSQL database service; Amazon ElastiCache, a fast, in-memory caching service in the cloud; and Amazon Redshift, a fully managed, petabyte-scale data-warehouse solution that can be surprisingly economical. We will cover how each service might help support your application, how much each service costs, and how to get started.
Learning Objectives:
• Overview of managed database services available on AWS
• How to combine them for high-performance cost effective architectures
• Learn how to choose between the AWS database services based on the use case
Who Should Attend:
• IT Managers, DBAs, Enterprise and Solution Architects, DevOps Engineers, and Developers
#MFSummit2016 Operate: The race for space, by Micro Focus
The Race for Space: File Storage Challenges and Solutions. Facing escalating storage requirements? Being held to ransom by your vendors? Would secure, scalable, highly available, and cost-effective file storage that works with your current infrastructure help? Micro Focus and SUSE could help. Presenters: David Shepherd, Solutions Consultant, Micro Focus, and Stephen Mogg, Solutions Consultant, SUSE.
2015 deploying flash in the data center, by Howard Marks
Deploying Flash in the Data Center discusses various ways to deploy flash storage in the data center to improve performance. It describes all-flash arrays, which provide the highest performance, as well as less expensive options such as hybrid arrays that combine flash and disk. It also covers using flash in servers or as a cache to accelerate storage arrays. Choosing the best approach depends on factors like workload, budget, and existing infrastructure.
2015 deploying flash in the data center, by Howard Marks
This document discusses deploying flash storage in the data center to improve storage performance. It begins with an overview of the performance gap between processors and disks. It then discusses all-flash arrays, hybrid arrays, server-side flash caching, and converged architectures as solutions. It provides details on flash memory types, form factors, and considerations for choosing a flash solution.
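The server-side flash caching approach described in these sessions can be sketched as a small LRU read cache. The capacity, access pattern, and eviction policy below are illustrative; a real cache must also handle writes, invalidation, and persistence.

```python
# Toy server-side flash read cache illustrating the caching idea above.
# The OrderedDict stands in for the flash device; the backend dict is the
# slow disk tier. Only the hot-read path is shown.
from collections import OrderedDict

class FlashReadCache:
    def __init__(self, capacity_blocks, backend):
        self.capacity = capacity_blocks
        self.backend = backend          # slow tier: block_id -> data
        self.cache = OrderedDict()      # stands in for the flash device
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # LRU: refresh recency
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backend[block_id]          # slow HDD read
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

backend = {i: f"block-{i}" for i in range(10)}
cache = FlashReadCache(capacity_blocks=3, backend=backend)
for blk in [1, 2, 1, 3, 1, 4, 1]:   # block 1 is hot
    cache.read(blk)
print(cache.hits, "hits,", cache.misses, "misses")
```

This is why caching pays off for the random-access workloads named above: the hot block is served from flash on every repeat access while cold blocks cycle through.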
Ceph Day Melbourne - Ceph on All-Flash Storage - Breaking Performance Barriers, by Ceph Community
The document discusses a presentation about Ceph on all-flash storage using InfiniFlash systems to break performance barriers. It describes how Ceph has been optimized for flash storage and how InfiniFlash systems provide industry-leading performance of over 1 million IOPS and 6-9GB/s of throughput using SanDisk flash technology. The presentation also covers how InfiniFlash can provide scalable performance and capacity for large-scale enterprise workloads.
The document discusses Oracle's Zero Data Loss Recovery Appliance. It aims to fundamentally change how databases are protected by shipping database changes in real time instead of taking periodic backups. This minimizes impact on production databases and ensures zero data loss. It stores database changes efficiently on disk and can restore databases to any point in time using these deltas. It also creates space-efficient "virtual" full backups without repeatedly taking full backups. This enables long retention of backup history with minimal storage.
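The "virtual full backup" idea described above, materializing any point in time from one full copy plus change deltas, can be sketched as follows. The data structures are illustrative, not Oracle's implementation.

```python
# Sketch of point-in-time restore from a base image plus ordered deltas.
# Only changed blocks are stored after the initial full copy.

def restore(base, deltas, point_in_time):
    """base: {block: data} full backup at t=0.
    deltas: list of (timestamp, {block: new_data}) in time order."""
    image = dict(base)                 # never mutate the stored base
    for ts, changes in deltas:
        if ts > point_in_time:
            break
        image.update(changes)          # apply each delta up to the target time
    return image

base = {0: "a0", 1: "b0", 2: "c0"}
deltas = [(10, {1: "b1"}), (20, {0: "a1", 2: "c1"}), (30, {1: "b2"})]
print(restore(base, deltas, 20))   # state as of t=20
print(restore(base, deltas, 5))    # before any delta: the original full backup
```

Storage grows only with the changed blocks, which is how long retention stays cheap relative to keeping many full copies.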
The document discusses Oracle's database strategy with Oracle Database 11g. It aims to simplify IT infrastructure through consolidation, reducing costs and complexity. Key points include pooling resources for improved utilization, automated management for reduced support costs, and new capabilities for increased availability and adaptability to change.
The document discusses IBM Spectrum Scale, a software-defined storage solution from IBM. It provides:
1) A family of software-defined storage products including IBM Spectrum Control, IBM Spectrum Protect, IBM Spectrum Archive, IBM Spectrum Virtualize, IBM Spectrum Accelerate, and IBM Spectrum Scale.
2) IBM Spectrum Scale allows storing data everywhere and running applications anywhere. It provides highly scalable, high-performance storage for files, objects, and analytics workloads.
3) The document provides an overview of the IBM Spectrum Scale product and its capabilities for optimizing storage costs, improving data protection, enabling global collaboration, and ensuring data availability, integrity and security.
The document discusses data protection and disaster recovery. It describes traditional backups that can take days for recovery versus new technologies that enable recovery in hours. It discusses three components of business continuity: high availability, continuous operations, and disaster recovery. The key goals of business continuity planning are outlined. Traditional backup architectures and recovery metrics are depicted. Emerging technologies like snapshots, replication, and automation are discussed which improve recovery point objectives (RPO) and recovery time objectives (RTO). The document emphasizes that disaster recovery requires a holistic business solution approach involving people, processes, and technologies.
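The RPO and RTO metrics discussed above lend themselves to a small worked example. The intervals, data size, and restore bandwidth are illustrative figures; real objectives come from business requirements, not formulas.

```python
# Worked example of RPO and RTO. Figures are illustrative assumptions.

def worst_case_rpo_minutes(interval_minutes):
    """Recovery points every N minutes can lose up to N minutes of updates."""
    return interval_minutes

def estimated_rto_hours(data_tb, restore_gbps):
    """Rough restore time: data size divided by restore bandwidth."""
    gigabits = data_tb * 1000 * 8      # TB -> gigabits
    return gigabits / restore_gbps / 3600

print(worst_case_rpo_minutes(24 * 60))        # nightly backup: up to 1440 min lost
print(worst_case_rpo_minutes(5))              # 5-minute replication: up to 5 min
print(round(estimated_rto_hours(50, 10), 1))  # 50 TB at 10 Gb/s of restore bandwidth
```

This is the sense in which snapshots and replication "improve RPO": shrinking the interval between recovery points directly shrinks the worst-case data loss, while RTO is bounded by how fast data can be brought back.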
Introduction to MariaDB. Covers the history of Structured Query Language (SQL), MySQL, and MariaDB; shows how to install on a Windows, Mac, or Linux desktop; and includes practical examples.
IBM is announcing new storage products and updates for 1Q20:
S016828 storage-tiering-nola-v1710b
1. IBM Power Systems and IBM Storage Technical University
New Generation of Storage Tiering: Simpler Management, Lower Costs and Increased Performance
Tony Pearson
Master Inventor and Senior IT Architect, IBM Corporation
2. Abstract
Confused about how to implement storage tiering across Flash, Disk, and Tape storage system resources? This session will cover the various techniques and technologies available.
3. This week with Tony Pearson

Day        Time     Topic
Monday     10:15am  Business Continuity – The seven tiers of business continuity and disaster recovery
           1:45pm   IBM's Cloud Storage Options
           4:30pm   Introduction to IBM Cloud Object Storage System and its Applications (powered by Cleversafe)
Tuesday    10:15am  The Pendulum Swings Back – Understanding Converged and Hyperconverged Environments
           11:30am  New generation of storage tiering: Simpler management, Lower costs and Increased performance
           3:15pm   Introduction to IBM Cloud Object Storage System and its Applications (powered by Cleversafe)
Wednesday  9:00am   IBM Spectrum Scale for Volume, File and Object Storage
4. Is your data on the right storage tier?
• Requirements change over time
• Data owners are risk averse
• Users don't see the total cost
• Rationing resources is unpopular
… but the biggest challenge has been:
• No objective way to determine what the 'right tier' should be!
… so data stays on top-tier storage (expensive)
• Resources that should be spent on innovation are wasted on infrastructure inefficiencies
[Chart: Typical vs. Optimal storage tier distribution – typically, some 70% of data sits on Tier 1; an optimal distribution spreads data across Tier 0 through Tier 3, with ranges shown of ~0-1%, 1-5%, 15-20%, 20-25% and 50-60%]
5. Too Many Critical Storage Projects and… No Budget to Implement
Top Storage Pain Points: How do I fix these problems?
Top Storage Projects: How do I fund these projects?
6. Storage Tiers – A trade-off between performance and cost
From faster performance to lower cost:
• Server Cache, Flash and Solid-State Drives
• Hard Disk Drives
• Automated Tape
• Manual Tape
Technologies allow us to place and move data on the appropriate storage tier to balance between performance and cost.
7. Four Fundamental Truths of Storage Tiering
• All data is not created equal
• Information changes in business value and in service level requirements over time
• IT resources should be allocated according to the value of information
• Information must be managed throughout its entire lifespan … data outlives media
[Chart: data value (0-100) vs. age of data (1 week to 10 years) for machine data, email, EMR, databases, and surveillance video]
8. Tiered Information Environment
A tiered information environment aligns IT resources with the Business Value and Service Levels required.
Best Practices:
1. Align information with business requirements to determine 3-5 Information Classes
2. Establish policies to map information to a Class of Service
   • Initial placement
   • Subsequent movement
   • Backups, archives, mirroring
   • Disposal, destruction, deletion
3. Establish well differentiated tiers of information infrastructure associated with each service level
[Diagram: application/data types mapped, through policies and governance (Policies / ISSC / Information Management), to four infrastructure Classes of Service – Platinum (Mission Critical), Gold (Business Critical), Silver (Business Operational), Bronze (General Business) – layered over client software (backup, compliance, SRM, storage access, replication), device management software (SAN hardware, storage arrays), storage virtualization, and storage hardware (disk, tape, storage networking)]
9. Introducing I/O Density – Performance measurement
• What is I/O Density?
  – For each LUN, the amount of IOPS is divided by the amount of resident data
  – IOPS = I/O operations (reads and writes) per second
  – I/O Density = IOPS per terabyte of data for a given volume
• I/O Density (IOPS/TB) provides a level-set view of performance regardless of volume size, allowing a uniform unit of measurement for analysis
• IOPS and TB tend to grow at similar rates, keeping I/O Density roughly constant for each application
• The I/O Density value is the peak of the averages taken for the hour or day, depending on the sampling chosen (Daily, Hourly or Sample Average)
  – The Daily Average has thus far proven the most reliable indicator of future re-tiering results
  – The Hourly Average is useful for brief but intense high-demand workloads
Note: I/O Density can be represented per TB or per GB: 700 IOPS/TB = 0.7 IOPS/GB
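The definition above can be sketched in a few lines. This is an illustrative calculation only (the volume capacity and IOPS samples are made up), using the daily-average sampling the slide recommends: take the peak of the per-day average IOPS and divide by resident capacity in TB.

```python
# Illustrative sketch: sample values are hypothetical, not from a real system.

def io_density(daily_avg_iops, capacity_tb):
    """I/O density in IOPS/TB: peak of the daily-average IOPS over capacity."""
    return max(daily_avg_iops) / capacity_tb

# Five days of daily-average IOPS for one 10 TB volume
samples = [5200, 4800, 7100, 6900, 5050]
print(io_density(samples, 10.0))   # 710.0 IOPS/TB
```

Per the note above, the same volume could equivalently be described as 0.71 IOPS/GB.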
10. The Many Forms of Tiering – Single System Optimization
Media tiers, from highest cost ($$$) downward:
• Low-latency DRAM and Flash
• Solid State Drives (SSD)
• Enterprise Disk (15K and 10K)
• Nearline Disk (7200 RPM)
• Automated Tape Libraries
Single systems that tier across these media: FlashSystem V9000 with External Disk, DS8000 Disk System, SAN Volume Controller, Storwize
11. I/O Density for Different Disk Technologies
[Chart: I/O density (IOPS/TB) on a log scale from 1 to 1000 for drives ranging from 73GB 15K RPM to 4TB 7200 RPM; density falls as capacity grows]
Spinning disks get larger in capacity, but the overall IOPS per spindle remains constant, causing lower I/O density.

RPM    IOPS/drive*
15K    175-210
10K    125-150
7200   75-100

* Typical values; drives may vary
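The chart's trend can be reproduced directly from the IOPS-per-drive table: divide a typical per-spindle IOPS figure by the drive's capacity in TB. The drive list below is illustrative, using mid-range values from the table above.

```python
# Illustrative sketch: per-spindle IOPS stays roughly constant within an RPM
# class, so I/O density (IOPS/TB) drops as drive capacity grows.

drives = {  # name: (capacity_tb, typical_iops_per_drive)
    "300GB 15K": (0.3, 190),
    "900GB 10K": (0.9, 140),
    "4TB 7200":  (4.0, 90),
}

density = {name: iops / tb for name, (tb, iops) in drives.items()}
for name, d in density.items():
    print(f"{name}: {d:.0f} IOPS/TB")
```

The 4TB nearline drive comes out at roughly 22 IOPS/TB, more than an order of magnitude below the small 15K drive, which is why high-density workloads belong on flash.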
12. Automated Sub-LUN Tiering within Storage Array
Problem:
• SSDs appear more expensive than traditional disks (per GB)
• Without optimization tools, clients have been over-provisioning them
• Administrators spend too much time monitoring, reporting, and tuning tiers
Solution:
Three data relocation functions that enable smart data placement and movement to optimize SSD deployments with minimal costs:
– Entire-LUN Relocation
– Sub-LUN Automatic Movement
– Re-balancing Intra-Tier Extent Pool
[Diagram: an array tiering across Flash and Solid-State Drives, Enterprise HDD (15K and 10K rpm), and Nearline HDD (7200 rpm)]
13. IBM Easy Tier®
• Pools can have mixed media:
  – Tier 0 (High Performance) Flash
  – Tier 1 (High Capacity, Read-Intensive) Flash
  – Enterprise HDD (15K and 10K RPM)
  – Nearline HDD (7200 RPM)
• Easy Tier measures and manages activity
  – 24-hour learning period
  – Every five minutes: up to 8 extents moved
    • Hottest extents moved up to faster tiers
    • Coldest extents moved down to slower tiers
  – New allocations placed initially on the fastest HDD
• A small amount of Flash (as little as 2-3%) can dramatically reduce response times and increase IOPS throughput
• The Storage Tier Advisory Tool can estimate the benefits of adding Flash before purchase!
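As a rough illustration of the behavior described above (a toy sketch, not IBM's actual Easy Tier algorithm), a five-minute cycle can be modeled as promoting the hottest extents and demoting the coldest, capped at 8 extent moves per cycle. The extent names and heat counters below are made up.

```python
# Toy model of one Easy Tier-style relocation cycle (illustrative only).

MOVES_PER_CYCLE = 8   # cap on extent moves per five-minute cycle

def rebalance(fast, slow, heat):
    """Promote hot extents from `slow`, demoting cold extents from `fast`."""
    moves = 0
    # Consider the hottest extents currently sitting on the slow tier
    for ext in sorted(slow, key=lambda e: heat[e], reverse=True):
        if moves >= MOVES_PER_CYCLE:
            break
        coldest_fast = min(fast, key=lambda e: heat[e])
        if heat[ext] <= heat[coldest_fast]:
            break                      # nothing left worth promoting
        # Swap: demote the coldest fast-tier extent, promote the hot one
        fast.remove(coldest_fast); slow.remove(ext)
        fast.append(ext); slow.append(coldest_fast)
        moves += 2                     # one promotion + one demotion

heat = {"a": 900, "b": 20, "c": 850, "d": 5, "e": 600}   # I/O counters
fast, slow = ["b", "d"], ["a", "c", "e"]                 # before rebalance
rebalance(fast, slow, heat)
print(sorted(fast), sorted(slow))   # ['a', 'c'] ['b', 'd', 'e']
```

After one cycle the two hottest extents sit on the fast tier; the real product does this continuously across all pool members with no administrator involvement.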
14. Flash and Solid State Drive Options – Tier 0 vs Tier 1
Tier 0 (High Performance)
• Due to repetitive write-erase cycles, flash drives have a life span, or "write endurance"
• Most current "write intensive" flash drives have endurance that allows up to 10-25 Drive Writes per Day (DWPD)
• For example, with a 700GB flash drive at 10 DWPD, you can write up to 7 TB to that one drive per day and maintain the usable capacity for 5 years
Tier 1 (High Capacity, Read-Intensive)
• Flash drive vendors have engineered a lower-cost drive, qualified for up to 1-5 DWPD
• Often called "read intensive" (RI) flash drives
• One-DWPD drives can meet the vast majority of workload demands
• IBM and competitors are aggressively moving to all-flash arrays with extensive use of read-intensive drives for competitive reasons
[Diagram: the same 1050 GB of raw flash is sold as 700GB with 350GB of over-provisioning for 10 DWPD (Tier 0), or as 1000GB with only 50GB extra for 1 DWPD (Tier 1)]
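The endurance arithmetic on this slide reduces to a one-line helper. A sketch (the function name is arbitrary; the drive sizes are the slide's own examples):

```python
# DWPD arithmetic: a drive rated for N Drive Writes Per Day can absorb
# capacity * N of writes per day for its warranty period.

def daily_write_limit_tb(capacity_gb, dwpd):
    """Maximum sustained writes per day, in TB, for a drive's DWPD rating."""
    return capacity_gb * dwpd / 1000

print(daily_write_limit_tb(700, 10))   # 7.0 TB/day (Tier 0, write-intensive)
print(daily_write_limit_tb(1000, 1))   # 1.0 TB/day (Tier 1, read-intensive)
```

The Tier 1 drive carries more usable capacity but tolerates far less daily write traffic, which is why Easy Tier steers write-heavy extents toward Tier 0 flash.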
15. Workload skew from different client environments
[Chart: cumulative percentage of activity vs. percentage of extents for six client environments (Mainframe 1-3 and Open 1-3), each plotted for small I/Os and for MB transferred; bands show how the SSD, Enterprise, and Nearline tiers absorb the activity. Source: internal IBM lab tests]
16. Easy Tier Application Transaction Improvement
[Chart: application transactions over time – throughput is flat during the Easy Tier learning period; once Easy Tier is in action, brokerage transaction throughput reaches 240% of the original]
• No change to the database or application
• No work required to identify active indexes or I/O profiles
• No manual movement of files or volumes
• Just turn it on and let it work!
17. Best Practices – Three Pools
• Put databases with a high IO/s-per-TB ratio on Flash-only pools
• Put production databases on Easy Tier™ tiered storage
• Put non-production databases on HDD-only pools
[Diagram: Spectrum Virtualize storage pools (FlashSystem, Storwize / XIV / DS8000) – a Flash-only pool serving a high-I/O-density ERP database, an Easy Tier pool serving the production DB servers (ERP, SCM, SRM, CRM, BW), and an HDD-only pool serving the non-production DB servers]
18. The Many Forms of Tiering – Datacenter Optimization
IBM Spectrum Control and IBM Spectrum Virtualize optimize data placement across the datacenter, spanning low-latency DRAM and Flash, Solid State Drives (SSD), Enterprise Disk (15K and 10K), Nearline Disk (7200 RPM), and Automated Tape Libraries, from higher cost ($$$) to lower cost ($$).
19. Realize Cost-Savings through Right-Tiering of Data Storage
Problem:
• High-end disk arrays are expensive
• It is difficult to identify which data should be moved
• Manually relocating LUNs is time-consuming and disruptive
20. IBM Intelligent ILM Implementation Phases
1. Understand Client Data – Analyze your data usage patterns and provide recommendations on how to cost-effectively store your data using storage tiers
2. Implement Tiering & Lifecycle Policies – Define and implement storage tiers with policies on where to place your data initially and when to move it based on its changing business value
3. Automate Lifecycle Management – Automate the movement of your data, without disruption or downtime, to lower-cost storage tiers based on pre-defined policies and your business priorities
[Diagram: data flowing across Tier 1, Tier 2, Tier 3, Archive and Tape]
21. Where Does Your Data Belong?
Different data have different I/O densities. Intelligent Information Lifecycle Management (IILM) identifies the I/O density of your existing data to help relocate data to more cost-effective storage.

Typical distribution by I/O density (IOPS/TB)*:
• Tier 0 (>1000): 4%
• Tier 1A (700-1000): 2%
• Tier 1B (500-700): 3%
• Tier 2 (100-500): 20%
• Tier 3 (10-100): 42%
• Archive (<10): 24%
• Inactive: 5%

* Typical percentages; client data may vary
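The tier bands above amount to a small lookup, shown here as an illustrative sketch (the threshold values are taken from this slide; the function name is arbitrary):

```python
# Map a volume's I/O density (IOPS/TB) to the tier band it belongs on.
# Band thresholds are illustrative, following the slide's typical values.

TIER_BANDS = [            # (lower bound in IOPS/TB, tier name)
    (1000, "Tier 0"),
    (700,  "Tier 1A"),
    (500,  "Tier 1B"),
    (100,  "Tier 2"),
    (10,   "Tier 3"),
    (0,    "Archive"),
]

def classify(io_density):
    """Return the tier band for a measured I/O density."""
    if io_density == 0:
        return "Inactive"            # known zero access, not unknown access
    for lower, tier in TIER_BANDS:
        if io_density > lower:
            return tier
    return "Archive"

print(classify(1500), classify(250), classify(4), classify(0))
# Tier 0 Tier 2 Archive Inactive
```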
22. 1. Understand Your Data – Intelligent Information Lifecycle Management (IILM) services
Intelligent Information Lifecycle Management (IILM) simulates savings by analyzing historical data usage patterns in the existing environment.
• Using IBM Spectrum Control or similar tools, the IBM team identifies the volumes and TB to be analyzed over a period of 30 days
  – The IOPS average is based on calculations using 96-288 samples taken for each volume throughout a 24-hour period
  – Often, the IBM team finds volumes of data that were totally inactive during the analyzed period. This is not unknown access, but known zero access
• Based on the performance data collected from the storage environment, the analysis determines the average Indicative Tier Distribution for the volumes:

Tier      IOPS/TB    Daily TB  Daily %  Hourly TB  Hourly %
Tier 0    >1000        17.61     1%      102.25      4%
Tier 1a   700-1000     11.31     0%       40.75      2%
Tier 1b   550-700      19.08     1%       61.87      3%
Tier 2    100-500     225.95    10%      473.55     20%
Tier 3    10-100     1147.23    50%     1010.86     42%
Archive   <10         763.10    33%      579.83     24%
Inactive  0           120.42     5%      123.65      5%

[Chart: Indicative Tier Distribution pie – Tier 0, Tier 1a, Tier 1b, Tier 2, Tier 3, Nearline, Inactive]
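A hypothetical sketch of how an Indicative Tier Distribution like the table above could be aggregated: sum each volume's capacity into its I/O-density band and report each band's share. The band edges follow this slide's table; the volume list is invented.

```python
# Capacity-weighted tier distribution from (io_density, capacity_tb) pairs.
# Illustrative only: band edges from the slide, volumes are made up.

bands = [("Tier 0", 1000), ("Tier 1a", 700), ("Tier 1b", 550),
         ("Tier 2", 100), ("Tier 3", 10), ("Archive", 0)]

def distribution(volumes):
    """Percentage of total TB landing in each tier band."""
    totals = {name: 0.0 for name, _ in bands}
    totals["Inactive"] = 0.0
    for density, tb in volumes:
        if density == 0:
            totals["Inactive"] += tb   # known zero access
            continue
        for name, lower in bands:
            if density > lower:
                totals[name] += tb
                break
    grand = sum(totals.values())
    return {name: round(100 * tb / grand) for name, tb in totals.items()}

vols = [(1500, 2), (300, 10), (50, 40), (4, 30), (0, 5)]   # 87 TB total
print(distribution(vols))
```

Run against 30 days of real samples, this kind of roll-up is what drives the tier-purchase recommendations on the next slide.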
23. 2. Implement Tiering & Lifecycle Policies
• Purchase additional Tier 0 (Cache, Flash, SSD) for the most demanding I/O densities
• Purchase additional Tier 2 and Tier 3 for less demanding I/O densities
24. IBM Virtual Storage Center (VSC)
Optimize Your ResourcesOptimize Your Resources
Automate Your WorkloadsAutomate Your Workloads
Simplified
Management
Simplified
Management
ComputeCompute StorageStorage NetworkNetwork
APIs
Orchestration
Service Levels
Standard Interfaces
Provisioning
Virtualization
Control
Plane
Data
Plane
SAN
Storage Virtualization
IBM Spectrum Virtualize
•IBM SAN Volume Controller
•IBM Storwize V7000 / V5000
•IBM FlashSystem V9000
Snapshot
Data Protection
Storage Optimization,
Provisioning and
Transformation
Infrastructure
Resource Management
IBM Spectrum Control
•Data and storage management
•Storage analytics engine
One or
more
25. IBM Spectrum Control and Virtual Storage Center
IBM Virtual
Storage Center
• Storage Analytics
• Policy-based
Automation
• Service level
provisioning
IBM Spectrum
Control
Advanced
Edition
IBM
Spectrum
Control
Base Edition
• VMware
IBM
Spectrum
Virtualize
IBM Spectrum
Control
Standard
Edition
• Capacity Planning
and Provisioning
• Performance
Monitoring and Alerts
IBM Copy Services
Manager
Base Edition
IBM
Spectrum
Snapshot
Standard
Edition
IBM Spectrum
Control
Storage
Insights
• Reclaim space
• Optimize data
placement
• Monitor capacity
and performance
On-premises
Off-premises
26. 3. Automate Lifecycle Policies
(Diagram: a SAN connecting multiple arrays,
each with SSD, Ent HDD, and NL HDD tiers)
Solution:
IBM Spectrum Virtualize™ can manage
hundreds of arrays (IBM and non-IBM)
IBM Spectrum Control Advanced Edition
recommends and performs up-tier and
down-tier moves based on I/O density and
age of the data
Move LUNs non-disruptively within and across
arrays
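The up-tier/down-tier logic described above can be sketched as a simple decision rule. The thresholds, the LUN record, and the decide_move() helper are illustrative assumptions; in practice the recommendations come from IBM Spectrum Control Advanced Edition.

```python
# Illustrative up-tier / down-tier decision: hot LUNs (high I/O density)
# move up, cold or aging LUNs move down. Thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Lun:
    name: str
    tier: str           # "SSD", "EntHDD", or "NLHDD"
    iops_per_tb: float  # measured I/O density
    days_idle: int      # days since last significant access

ORDER = ["NLHDD", "EntHDD", "SSD"]   # slowest to fastest

def decide_move(lun: Lun) -> str:
    """Return the recommended tier for a LUN."""
    rank = ORDER.index(lun.tier)
    if lun.iops_per_tb > 500 and rank < len(ORDER) - 1:
        return ORDER[rank + 1]            # hot data: up-tier
    if (lun.iops_per_tb < 10 or lun.days_idle > 90) and rank > 0:
        return ORDER[rank - 1]            # cold or aged data: down-tier
    return lun.tier                       # leave in place

hot = Lun("db_log", "EntHDD", iops_per_tb=800, days_idle=0)
cold = Lun("archive01", "EntHDD", iops_per_tb=2, days_idle=120)
print(decide_move(hot), decide_move(cold))  # SSD NLHDD
```

Because IBM Spectrum Virtualize sits in front of the arrays, the resulting LUN moves can be performed non-disruptively, within or across arrays.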
27. IBM Global Account (IGA)
As part of a technology refresh, IBM internally
transformed its heavily Tier 1 environment to a
4-tier cost-effective infrastructure
– IGA realized $17 million in USD cost savings
in 2012, primarily through CAPEX avoidance
– IBM realized $90 million in savings over five
years
– “We were able to reduce a multi-day
complex process to a matter of 2-3
hours!”
— Kris Myers, Dir. Information Technology
IBM Global Account Division
IBM VSC managing the following tiers:
– Tier 1A = DS8000 w/ 15K RPM and SSD
– Tier 1B = XIV w/ 7200 RPM disk
– Tier 3 = V7000 w/ 2TB 7200 RPM
Data that is most active remains on Tier 1,
while data with lower activity is moved down
to lower tiers, consuming less costly storage
capacity on Tier 3
Case Study: IBM Global Account
USD $17M cost savings in 2012
• Cost savings of over $90M over 5 years
28. IBM Sweden Shared Environment
IBM Sweden has begun transforming its Tier 1-only
environment to a 3-tier cost-effective
infrastructure to deliver tiered solutions to its
existing and prospective clients
Customer #1: IBM Sweden Internal Data
– Estimated savings of 15M Swedish Krona
(SEK) over the next 5 years
Customer #2: Large Fertilizer Business
– Estimated savings of SEK 12M over the
next 5 years
Existing shared clients
– Estimated savings of SEK 50M over the
next 5 years for 12 existing shared clients
IBM Sweden Shared Tiered Architecture
– Tier 1A = DS8K w/ 300GB 15K RPM drives
– Tier 1B = DS8K w/ 600GB 10K RPM
– Tier 3 = V7000 w/ 3TB 7.5K RPM
SEK 50M cost savings
estimated over 5 years
Case Study #2: IBM Sweden Shared Environment
29. The Many Forms of Tiering – Global Optimization
Low-latency DRAM
and Flash
Solid State
Drives (SSD)
Enterprise Disk
(15K and 10K)
Nearline Disk
(7200 RPM)
Automated Tape
Libraries
$$$
$$
$
IBM Spectrum Scale
IBM Spectrum Archive
Global
Optimization
30. IBM Spectrum Scale – Flexible File and Object Storage
FS1 . . . FS256
Exabyte-Scale,
Global Namespace
One big file system or divide into as
many as 256 smaller file/object
systems
Each file system can
be further divided into
fileset containers
Network Shared Disk
(NSD) refers to:
• Flash and Disk
devices
• Servers connected to
these devices
• Protocol between
clients and servers
Metadata can be separated into
its own pool or intermixed with
data
Files and objects
can be migrated to
Tape, Object store,
or Cloud
31. IBM Spectrum Scale™ – Supported Topologies
Twin-tailed
FCP, iSCSI, IB
Internal, Direct-Attach
Share-Nothing Pools
Shared Pools
NSD Servers
Access files on direct, twin-tailed,
or SAN-attached disk
OpenStack drivers
Can be enabled as
“Protocol Nodes”
File Placement
Optimization (FPO)
Servers
For AIX, Linux-x86
and Linux on POWER
Access files on direct
attached disk
Exports files to other
FPO servers
Hyperconverged
External Clients
Access data via NAS, HDFS and
object protocols over IP network
TCP/IP
NSD Clients
For Linux, AIX,
and Windows
Access files via
SAN, TCP/IP or
RDMA
TCP/IP or RDMA network
32. IBM Spectrum Scale™ – More than just a file system!
Remote Office/
Branch Office
Other NAS
Other
Datacenters
Scale
Active File
Management
(AFM) caches data
to where it is
needed, can be
used to migrate from
other NAS
Hierarchical Storage
Management (HSM) migrates
infrequently accessed files to tape or
object-based cloud, automatically
recalls back when accessed
Local Read-Only Cache (LROC)
and Highly Available Write Cache
(HAWC) caches the busiest blocks of
files on local flash
Disaster Recovery
(DR) mirrors data
to remote locations
Active/active
Migrate/Recall
Tape
NSD Client
Information Lifecycle
Management (ILM) moves data
across tiers of flash and disk
Object
Cloud
Cloud
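The age-based side of ILM can be illustrated with a standalone sketch that selects migration candidates by last-access time. In IBM Spectrum Scale this selection is expressed in the product's built-in policy language and run by the policy engine (mmapplypolicy); the path, cutoff, and helper below are illustrative only.

```python
# Find files whose last access is older than a cutoff, as candidates to
# migrate to tape or object storage. A plain directory walk standing in
# for the Spectrum Scale policy engine; for illustration only.
import os
import time

def migration_candidates(root: str, days: int = 90):
    """Yield (path, age_days) for files not accessed within `days` days."""
    cutoff = time.time() - days * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                atime = os.stat(path).st_atime
            except OSError:
                continue                  # file vanished or unreadable
            if atime < cutoff:
                yield path, (time.time() - atime) / 86400

# Hypothetical path; substitute a real fileset directory.
for path, age in migration_candidates("/data/projects", days=180):
    print(f"migrate {path} (idle {age:.0f} days)")
```

The real policy engine scans metadata in parallel across the cluster rather than walking directories, which is what makes ILM practical at exabyte scale.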
33. I/O Density for Different Disk Technologies
Automated
Tape library
Shelf Tape
(on-premises)
Shelf Tape
(off-premises)
As data ages, it is
accessed less frequently
It can be cost-effective to
move older data to
physical tape media
I/O Density scale (IOPS/TB): 1000, 700, 500, 100, 10, 1
34. Library Edition
Linux or Windows Server
Tape Library
NFS / SMB
Linux, etc.
Archive Management
Solutions
Application file
access to tape
IBM Spectrum Scale
File system
Single Drive Edition
LTFS Format Enablement
Single Drive Support
Library Edition
Digital Archive Enablement
Tape Automation Support
Enterprise Edition
Integrated Tiered
Storage Solutions
Application file access
to tiered storage
Tape Library 1 Tape Library n
IBM Spectrum Archive – Implementations
…
NSD
NFS/SMB
Object
POSIX
Hadoop
35. IBM Spectrum Archive™ Overview
IBM Spectrum Archive enables IBM tape libraries to read/write
LTFS-format tapes in an IBM Spectrum Scale™ environment
–Based on the integration of IBM Spectrum Scale™ and LTFS
format
–Supports LTFS-enabled libraries
and drives
•TS4500 and TS3500 Enterprise libraries
•TS4300, TS3310, TS3200, TS3100, TS2900 libraries
•TS1140 (or higher) Enterprise Drive
•LTO5 (or higher) Ultrium drive
–Integrated functionality with
IBM Spectrum Scale
•Supports policy-based migrations
•Seamless DMAPI usage
•Data replication to multiple pools
–Supports scale-out for capacity and I/O
•Seamless cache controls between
IBM Spectrum Archive nodes
•Tape drive performance balancing
•Multiple node performance balancing
(Diagram: clients in Los Angeles, London, and Tokyo
share a global namespace over a Wide Area Network,
with LTFS tape at each site)
36. Three ways to move cold data out of IBM Spectrum
Scale
Migrate/Recall
LTFS
Tape
Information Lifecycle
Management (ILM) moves
data across tiers of flash and
disk
Migrate/Recall
Migrate/Recall
Transparent
Cloud Tiering
Other NAS
AFM
IBM Cloud Object
Storage System
IBM Cloud
37. IBM Spectrum Storage and
IBM Cloud Object Storage System
Unified file and object
storage. Optimized for
high performance, across
flash, disk and object
store
Flash
Object
Store
15K
Object storage on disk
(File, backup and archive interfaces
available through a variety of options)
IBM Cloud
Amazon Web Services
Microsoft Azure
Swift S3 emulation
OpenStack Swift
Unified file and object
storage on tape
Transparent Cloud Tiering
Information Lifecycle
Management (ILM) across tiers
Highest performance
Lowest cost
10K 7200 Tape
38. Session summary
Tiered storage helps to balance
between performance and costs
• IBM Easy Tier can help optimize data
placement within a single system of
flash, enterprise and nearline disk
• IBM Spectrum Control and IBM
Spectrum Virtualize can help optimize
data placement across many different
flash and disk arrays in the datacenter
• IBM Spectrum Scale and IBM
Spectrum Archive can optimize data
placement globally, across multiple
datacenter locations, for data stored on
flash, disk and tape
42. IBM Tucson Executive Briefing Center
• Tucson, Arizona is
home for storage
hardware and
software design and
development
• IBM Tucson
Executive Briefing
Center offers:
– Technology
briefings
– Product
demonstrations
– Solution workshops
• Take a video tour!
– http://youtu.be/CXrpoCZAazg
https://www.ibm.com/it-infrastructure/services/client-centers
ccenter@us.ibm.com
43. About the Speaker
Tony Pearson is a Master Inventor and Senior IT Architect for the IBM Storage product line. Tony joined IBM Corporation in
1986 in Tucson, Arizona, USA, and has lived there ever since. In his current role, Tony presents briefings on storage topics
covering the entire IBM Storage product line, IBM Spectrum Storage software products, and topics related to Cloud Computing,
Analytics and Cognitive Solutions. He interacts with clients, speaks at conferences and events, and leads client workshops to
help clients with strategic planning for IBM’s integrated set of storage management software, hardware, and virtualization
solutions.
Tony writes the “Inside System Storage” blog, which is read by thousands of clients, IBM sales reps and IBM Business Partners
every week. This blog was rated one of the top 10 blogs for the IT storage industry by “Network World” magazine, and was the #1
most-read IBM blog on IBM’s developerWorks. The blog has been published in a series of books, Inside System Storage: Volumes
I through V.
Over the past years, Tony has worked in development, marketing and consulting for various storage hardware and software
products. Tony has a Bachelor of Science degree in Software Engineering, and a Master of Science degree in Electrical
Engineering, both from the University of Arizona. Tony is an inventor or co-inventor of 19 patents in the field of electronic data
storage.
9000 S. Rita Road
Bldg 9032 Floor 1
Tucson, AZ 85744
+1 520-799-4309 (Office)
tpearson@us.ibm.com
Tony Pearson
Master Inventor
Senior IT Architect
IBM Storage
46. Notice and disclaimers continued
Information concerning non-IBM products was obtained from the
suppliers of those products, their published announcements or
other publicly available sources. IBM has not tested those
products in connection with this publication and cannot confirm
the accuracy of performance, compatibility or any other claims
related to non-IBM products. Questions on the capabilities of
non-IBM products should be addressed to the suppliers of those
products. IBM does not warrant the quality of any third-party
products, or the ability of any such third-party products to
interoperate with IBM’s products. IBM expressly disclaims all
warranties, expressed or implied, including but not limited
to, the implied warranties of merchantability and fitness for
a particular purpose.
The provision of the information contained herein is not intended
to, and does not, grant any right or license under any IBM
patents, copyrights, trademarks or other intellectual
property right.
IBM, the IBM logo, ibm.com, AIX, BigInsights, Bluemix, CICS,
Easy Tier, FlashCopy, FlashSystem, GDPS, GPFS,
Guardium, HyperSwap, IBM Cloud Managed Services, IBM
Elastic Storage, IBM FlashCore, IBM FlashSystem, IBM
MobileFirst, IBM Power Systems, IBM PureSystems, IBM
Spectrum, IBM Spectrum Accelerate, IBM Spectrum Archive,
IBM Spectrum Control, IBM Spectrum Protect, IBM Spectrum
Scale, IBM Spectrum Storage, IBM Spectrum Virtualize, IBM
Watson, IBM z Systems, IBM z13, IMS, InfoSphere, Linear
Tape File System, OMEGAMON, OpenPower, Parallel
Sysplex, Power, POWER, POWER4, POWER7, POWER8,
Power Series, Power Systems, Power Systems Software,
PowerHA, PowerLinux, PowerVM, PureApplication, RACF,
Real-time Compression, Redbooks, RMF, SPSS, Storwize,
Symphony, SystemMirror, System Storage, Tivoli,
WebSphere, XIV, z Systems, z/OS, z/VM, z/VSE, zEnterprise
and zSecure are trademarks of International Business
Machines Corporation, registered in many jurisdictions
worldwide. Other product and service names might
be trademarks of IBM or other companies. A current list of
IBM trademarks is available on the Web at "Copyright and
trademark information" at:
www.ibm.com/legal/copytrade.shtml.
Linux is a registered trademark of Linus Torvalds in the United
States, other countries, or both. Java and all Java-based
trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.