4. Background As more and more digital devices (e.g. PCs, laptops, iPads, and smartphones) connect to the Internet, massive amounts of new data are created on the web. There were 5 exabytes of data online in 2002, rising to 281 exabytes by 2009, and the online data growth rate is outpacing Moore's Law. How, then, can we store and manage such massive data effectively and efficiently? A natural approach: the Distributed Storage System!
5. Traditional Storage Architecture Direct Attached Storage (DAS) - huge management burden - limited number of connected hosts - severely limited data sharing Fabric Attached Storage (FAS) - central system serves data to connected hosts - hosts and devices interconnected through Ethernet or Fibre Channel - NAS & SAN
6. FAS Implementations Network Attached Storage (NAS) - file-based storage architecture - data sharing across platforms - the file server can be a bottleneck Storage Area Network (SAN) - scalable performance, high capacity - limited ability to share data - weak security Since the traditional storage architectures cannot satisfy these emerging requirements well, novel approaches need to be proposed!
8. Storage Virtualization Definitions of storage virtualization by SNIA: - the act of abstracting, hiding, or isolating the internal functions of a storage (sub)system or service from applications, computer servers, or general network resources, for the purpose of enabling application- and network-independent management of storage or data - the application of virtualization to storage services or devices for the purpose of aggregating, hiding complexity, or adding new capabilities to lower-level storage resources Simply speaking, storage virtualization aggregates storage components, such as disks, controllers, and storage networks, in a coordinated way to share them more efficiently among the applications it serves.
9. Characteristics of an Ideal Solution A good storage virtualization solution should: Enhance the storage resources it virtualizes through the aggregation of services, increasing the return on existing assets Not add another level of complexity in configuration and management Improve performance rather than act as a bottleneck, so that it is scalable. Scalability is the capability of a system to maintain performance linearly as new resources (typically hardware) are added Provide secure multi-tenancy, so that users and data can share virtual resources without exposure to other users' bad behavior or mistakes Not be proprietary, but virtualize other vendors' storage in the same way as its own to make management seamless.
10. Types of Storage Virtualization Modern storage virtualization technologies can be implemented in three layers of the infrastructure In the server, some of the earliest forms of storage virtualization came from within the server’s operating systems In the storage network, network-based storage virtualization embeds the intelligence of managing the storage resources in the network layer In the storage controller, controller-based storage virtualization allows external storage to appear as if it’s internal
12. It does not require additional hardware in the storage infrastructure, and it works with any device the operating system can see.
13. Although it helps maximize the efficiency and resilience of storage resources, it’s optimized on a per-server basis only.
14. The task of mirroring, striping, and calculating parity requires additional processing, taking valuable CPU and memory resources away from the application.
15. Since every operating system implements file systems and volume management in different ways, organizations with multiple IT vendors need to maintain different skill sets and processes, with higher costs.
19. The virtualization devices are typically servers running system software and requiring as much maintenance as a regular server.
20. I/O can suffer from latency, impacting performance and scalability, due to the multiple steps required to complete each request, and is limited by the amount of memory and CPU available in the appliance nodes.
21. Decoupling the virtualization from the storage once it has been implemented is impossible because all the meta-data resides in the appliance, thereby making it proprietary.
23. Complexity is reduced as it needs no additional hardware to extend the benefits of virtualization. In many cases the requirement for SAN hardware is greatly reduced.
26. Interoperability issues are reduced as the virtualized controller mimics a server connection to external storage. Although a few downsides to controller-based virtualization exist, the advantages not only far outweigh them but also address most of the deficiencies found in server- and network-based approaches.
28. Motivation of Object Storage Improved device and data sharing - platform-dependent metadata moved to device Improved scalability & security - devices directly handle client requests - object security Improved performance - data types can be differentiated at the device Improved storage management - self-managed, policy-driven storage - storage devices become more autonomous
29. Objects in Storage The root object - the OSD itself User object - created by SCSI commands from the application or client Collection object - a group of user objects, such as all .mp3 files Partition object - a container whose objects share common security and space-management characteristics [Figure: an OSD holding one root object (one per device), partition objects P1-P4, collection objects, and user objects for user data; each user object carries an object ID, user data, and metadata attributes]
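The object hierarchy above can be sketched as plain data structures. This is a minimal illustration, not the OSD SCSI command set; all class and attribute names here are hypothetical.

```python
class UserObject:
    """User object: user data plus per-object metadata attributes."""
    def __init__(self, object_id, data=b"", attributes=None):
        self.object_id = object_id
        self.data = data
        self.attributes = attributes or {}

class Partition:
    """Partition object: container whose objects share security and
    space-management characteristics."""
    def __init__(self, partition_id):
        self.partition_id = partition_id
        self.user_objects = {}

class OSD:
    """Root object: one per device; holds partitions and collections."""
    def __init__(self):
        self.partitions = {}
        self.collections = {}  # collection name -> set of (partition, object id)

    def create_partition(self, pid):
        self.partitions[pid] = Partition(pid)
        return self.partitions[pid]

    def create_object(self, pid, oid, data, **attrs):
        obj = UserObject(oid, data, attrs)
        self.partitions[pid].user_objects[oid] = obj
        return obj

    def add_to_collection(self, name, pid, oid):
        self.collections.setdefault(name, set()).add((pid, oid))

# Example: one partition, one .mp3 object grouped into an "mp3" collection
osd = OSD()
osd.create_partition(1)
osd.create_object(1, 0x10, b"ID3...", content_type="audio/mpeg")
osd.add_to_collection("mp3", 1, 0x10)
```

Note how the attributes live with the object on the device, which is what lets the device itself act on metadata such as content type.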
30. Object Storage Device Two changes: - object-based storage offloads the storage component to the storage device - the device interface changes from blocks to objects [Figure: in the traditional model the stack is applications, system call interface, file system user component, file system storage component, block interface, block I/O manager, storage device; in the OSD model the file system storage component and block I/O manager move into the device, which the host reaches through an object interface]
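The interface change can be made concrete by contrasting the two device APIs. A rough sketch under assumed method names (a real OSD speaks a SCSI command set, not Python):

```python
class BlockDevice:
    """Traditional model: the host addresses fixed-size blocks by number
    and must manage layout (which blocks belong to which file) itself."""
    BLOCK_SIZE = 512

    def __init__(self, num_blocks):
        self.blocks = [bytes(self.BLOCK_SIZE)] * num_blocks

    def read_block(self, lba):
        return self.blocks[lba]

    def write_block(self, lba, data):
        self.blocks[lba] = data.ljust(self.BLOCK_SIZE, b"\0")

class ObjectDevice:
    """OSD model: the device manages layout internally; the host
    addresses variable-length objects by id and byte range."""
    def __init__(self):
        self.objects = {}  # object id -> bytearray

    def write(self, oid, offset, data):
        buf = self.objects.setdefault(oid, bytearray())
        buf[offset:offset + len(data)] = data

    def read(self, oid, offset, length):
        return bytes(self.objects[oid][offset:offset + length])
```

With the object interface, the block-allocation knowledge stays inside the device, which is exactly the "file system storage component" that moves down the stack in the figure.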
31. Object Storage Architecture Summary of OSD Key Benefits ■ Better data sharing – Using objects means less metadata to keep coherent, which makes it possible to share data across different platforms. ■ Better security – Unlike blocks, objects can protect themselves and authorize each I/O. ■ More intelligence – Object attributes help the storage device learn about its users, applications, and workloads. This leads to a variety of improvements, such as better data management through caching. Active disks can be implemented on OSDs, for example to run database filters. An intelligent OSD can also continuously reorganize its data, manage its own backups, and deal with failures.
32. Lustre Lustre (Linux + Cluster) - first open-source system with object storage - a massively parallel distributed file system - consists of clients, MDS, and OSTs - used by fifteen of the top 30 supercomputers in the world A single metadata server (MDS), with a single metadata target (MDT) per Lustre filesystem, stores namespace metadata such as filenames, directories, access permissions, and file layout. One or more object storage servers (OSSes) store file data on one or more object storage targets (OSTs). Clients access and use the data; concurrent and coherent read and write access to the files is allowed.
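The division of labor above (layout on the MDT, data striped across OSTs) can be sketched as a tiny placement function. This is an illustrative round-robin model under assumed names, not Lustre's actual layout code:

```python
def stripe_layout(file_size, stripe_size, osts, stripe_count):
    """Return which OST holds each stripe_size-sized piece of a file.

    In Lustre, this layout is part of the metadata the MDT stores; once a
    client has fetched it, reads and writes go straight to the OSSes.
    """
    pieces = -(-file_size // stripe_size)  # ceiling division
    return [osts[i % stripe_count] for i in range(pieces)]

# A 10 MB file, 4 MB stripes, striped over the first 2 of 4 OSTs
layout = stripe_layout(10 * 2**20, 4 * 2**20,
                       ["ost0", "ost1", "ost2", "ost3"], stripe_count=2)
```

Because the layout is fixed metadata rather than a per-I/O lookup, many clients can stream to different OSTs in parallel without going back to the MDS.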
33. Ceph Ceph is a distributed file system that provides excellent performance, reliability, and scalability, based on object storage devices. The metadata cluster stores the cluster map, controls data placement, and manages higher-level POSIX functions (such as open, close, and rename).
34. Panasas Panasas (Panasas, Inc.) - consists of OSDs, the Panasas File System, and MDS - claims to be the world's fastest HPC storage system
36. Distributed File System A distributed file system or network file system is any file system that allows access to files from multiple hosts sharing via a computer network (Wikipedia) The history - 1st generation (1980s): NFS, AFS - 2nd generation (1990~1995): Tiger Shark, Slice File System - 3rd generation (1995~2000): Global File System, General Parallel File System, DiFFs, CXFS, HighRoad - 4th generation (2000~now): Lustre, GFS, GlusterFS, HDFS Goals: performance, scalability, reliability, availability, fault tolerance
37. Google File System (GFS) GFS is a scalable distributed file system for large distributed data-intensive applications at Google Beyond the traditional assumptions - component failures are the norm - files are huge by traditional standards - new data is appended rather than overwritten - the applications and the file system API are co-designed GFS interface - create, delete, open, close, read, write - snapshot & record append The master maintains all file system metadata, such as the namespace, access control information, the mapping from files to chunks, and the locations of chunks. Clients interact with the master for metadata operations, but all data-bearing communication goes directly to the chunkservers. Files are divided into fixed-size (64 MB) chunks, and each chunk is identified by an immutable and globally unique 64-bit chunk handle. Chunkservers store chunks on local disks as Linux files. In addition, each chunk is replicated on multiple chunkservers, by default 3 replicas.
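The metadata path can be sketched in a few lines: the client turns a byte offset into a chunk index, and the master maps (file, index) to a chunk handle and its replica locations. The table contents and server names below are made up for illustration.

```python
CHUNK_SIZE = 64 * 2**20  # 64 MB fixed-size chunks, as described above

def chunk_index(offset):
    """Translate a byte offset within a file to a chunk index."""
    return offset // CHUNK_SIZE

# Hypothetical master-side tables (handle values and hostnames invented):
chunk_table = {("/logs/app.log", 0): 0xDEADBEEF}   # (file, index) -> 64-bit handle
locations = {0xDEADBEEF: ["cs1", "cs2", "cs3"]}    # handle -> 3 replica chunkservers

def lookup(path, offset):
    """What a client asks the master: which chunk, and where are its replicas?"""
    handle = chunk_table[(path, chunk_index(offset))]
    return handle, locations[handle]
```

After this single small exchange, all reads and writes of that chunk go directly to the chunkservers, which is why the master does not become a data bottleneck.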
38. Write Control and Data Flow 1. The client asks the master which chunkserver holds the current lease for the chunk and the locations of the other replicas. If no one has a lease, the master grants one to a replica it chooses. 2. The master replies with the identity of the primary and the locations of the other (secondary) replicas. The client caches this information. 3. The client pushes the data to all replicas, in any order. 4. Once all the replicas have acknowledged receiving the data, the client sends a write request to the primary. The primary assigns consecutive serial numbers to all the mutations it receives and applies each mutation to its own local state in serial number order. 5. The primary forwards the write request to the secondary replicas, which apply mutations in the same serial number order. 6. The secondaries all reply to the primary indicating that they have completed the operation. 7. The primary replies to the client. Error cases: if the write failed at the primary, it would not have been assigned a serial number and forwarded; it may instead have succeeded at the primary and only an arbitrary subset of the secondary replicas. The client code handles such errors by retrying the failed mutation.
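Steps 4-6 above hinge on the primary imposing one mutation order on all replicas. A minimal sketch of just that ordering logic (class names invented; real GFS replicas are separate processes exchanging RPCs):

```python
class Secondary:
    """Secondary replica: applies mutations in the serial order the
    primary chose, then acknowledges."""
    def __init__(self):
        self.log = []

    def apply(self, serial, mutation):
        self.log.append((serial, mutation))
        return True  # ack to the primary

class Primary:
    """Primary replica: assigns consecutive serial numbers, applies
    locally, and forwards each mutation to every secondary (steps 4-6)."""
    def __init__(self, secondaries):
        self.secondaries = secondaries
        self.next_serial = 0
        self.log = []

    def write(self, mutation):
        serial = self.next_serial
        self.next_serial += 1
        self.log.append((serial, mutation))  # apply to local state first
        acks = [s.apply(serial, mutation) for s in self.secondaries]
        return all(acks)  # primary replies to the client (step 7)

s1, s2 = Secondary(), Secondary()
primary = Primary([s1, s2])
primary.write("append A")
primary.write("append B")
```

Even if two clients issue writes concurrently, serializing at the primary guarantees every replica ends up with the same mutation order.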
39. Hadoop Distributed File System (HDFS) The Hadoop Distributed File System (HDFS) is an open-source implementation of GFS NameNode: a master server that manages the file system namespace and regulates access to files by clients DataNodes: manage storage attached to the nodes that they run on A file is split into one or more blocks, and these blocks are stored in a set of DataNodes
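The file-to-blocks split can be illustrated with simple arithmetic. The placement function below is a toy round-robin stand-in; real HDFS uses a rack-aware replica placement policy, and the DataNode names are invented.

```python
BLOCK_SIZE = 128 * 2**20  # default block size in recent HDFS releases

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Number of blocks a file of file_size bytes occupies (at least 1)."""
    return max(1, -(-file_size // block_size))  # ceiling division

def place_replicas(block_id, datanodes, replication=3):
    """Toy placement: spread each block's replicas round-robin.
    (Real HDFS places replicas with awareness of racks and load.)"""
    n = len(datanodes)
    return [datanodes[(block_id + i) % n] for i in range(min(replication, n))]

# A 300 MB file on a 4-node cluster
blocks = split_into_blocks(300 * 2**20)
placement = [place_replicas(b, ["dn1", "dn2", "dn3", "dn4"]) for b in range(blocks)]
```

The NameNode keeps only this block-to-DataNode map; the block bytes themselves never pass through it.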
40. Taobao File System Taobao File System (TFS) is a distributed file system optimized for the management of massive numbers of small files (~1 MB), such as pictures and descriptions of commodities Application/Client: accesses the name server and data servers through TFSClient Name server: stores metadata, monitors data servers through heartbeat messages, controls I/O balance, and keeps data location info such as <block id, data server> Data server: stores application data; handles load balancing and redundant backup
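The name server's <block id, data server> table amounts to a small lookup map. The sketch below is hypothetical: the "block:offset" string is a stand-in for TFS's real filename encoding, and all identifiers are invented.

```python
# Toy name-server table: block id -> data servers holding that block
block_map = {}

def register_block(block_id, servers):
    """Data servers report their blocks; the name server records locations."""
    block_map[block_id] = list(servers)

def locate(name):
    """Resolve a (fake) 'blockid:offset' name to its data servers.

    Many small files share one block, so the name server tracks blocks,
    not individual files, keeping its metadata tiny.
    """
    block_id, _, offset = name.partition(":")
    return block_map[int(block_id)], int(offset)

register_block(17, ["ds1", "ds2"])
```

Tracking blocks rather than files is the key trick for massive small-file workloads: metadata volume is proportional to blocks, not to the (much larger) file count.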
42. GlusterFS Effective distribution of data: file placement is handled intelligently using an elastic hash algorithm, so clients can locate files without consulting a central metadata server
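The idea behind hash-based placement can be shown in a few lines: every client hashes the path and gets the same answer, so no metadata lookup is needed. This is a simplification; GlusterFS's actual elastic hash assigns hash ranges to bricks per directory.

```python
import hashlib

def place(path, bricks):
    """Pick a brick for a file purely by hashing its path.

    Deterministic: any client computes the same brick for the same path,
    which is what removes the central metadata server from the read path.
    """
    h = int(hashlib.md5(path.encode()).hexdigest(), 16)
    return bricks[h % len(bricks)]

bricks = ["brick1", "brick2", "brick3"]
chosen = place("/photos/cat.jpg", bricks)
```

The trade-off is rebalancing: changing the brick list changes `len(bricks)` and remaps most files, which is why real systems layer range assignments or consistent hashing on top of this basic idea.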
44. Sheepdog Sheepdog is a distributed storage system for QEMU/KVM - Amazon EBS-like volume pool - highly scalable, available, and reliable - support for advanced volume management - not a general file system; the API is designed specifically for QEMU Zero configuration of cluster nodes: added and removed nodes are detected automatically
45. Sheepdog Volumes are divided into 4 MB objects; each object is identified by a globally unique 64-bit id and replicated to multiple nodes Consistent hashing is used to decide which node stores each object: each node is also placed on the ring, so the addition or removal of nodes does not significantly change the mapping of objects
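The consistent-hashing placement can be sketched as follows: nodes and object ids are hashed into one circular space, and an object is stored on the first node clockwise from its hash. A minimal single-replica sketch with invented node names (Sheepdog additionally replicates to the next nodes on the ring):

```python
import hashlib
from bisect import bisect

def _hash(key):
    """Map any key (node name or object id) into one shared hash space."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hash ring."""
    def __init__(self, nodes):
        self.points = sorted((_hash(n), n) for n in nodes)

    def node_for(self, object_id):
        keys = [p for p, _ in self.points]
        # First node clockwise from the object's hash, wrapping around
        i = bisect(keys, _hash(object_id)) % len(self.points)
        return self.points[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
```

When a node joins or leaves, only the objects in the arc adjacent to it move, which is why membership changes "do not significantly change the mapping of objects."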