The system is self-healing in the sense that it automatically routes around failure: if a node fails, its workload and data are transparently shifted somewhere else. The system is intelligent in the sense that the MapReduce scheduler optimizes for the processing to happen on the same node that stores the associated data (or one co-located on the same leaf Ethernet switch); it also speculatively executes redundant tasks if certain nodes are detected to be slow. One of the key benefits of Hadoop is the ability to just upload any unstructured files to it without having to "schematize" them first: you can dump any type of data into Hadoop, and the input record readers will abstract it out as if it were structured (i.e. schema on read vs. schema on write). Open-source software allows for innovation by partners and customers; it also enables third-party inspection of the source code, which provides assurances on security and product quality. 1 HDD = 75 MB/sec, but 1000 HDDs = 75 GB/sec: the "head of fileserver" bottleneck is eliminated.
http://developer.yahoo.net/blogs/hadoop/2009/05/hadoop_sorts_a_petabyte_in_162.html
100s of deployments worldwide (http://wiki.apache.org/hadoop/PoweredBy)
Speculative Execution, Data rebalancing, Background Checksumming, etc.
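The speculative execution mentioned above is a per-job switch. Here is a minimal sketch using the Hadoop 0.20-era JobConf API (later releases renamed these knobs):

```java
// Minimal sketch: toggling speculative execution per job with the
// Hadoop 0.20-era JobConf API (later versions renamed these settings).
import org.apache.hadoop.mapred.JobConf;

public class SpeculationConfig {
    public static JobConf withSpeculation(JobConf conf) {
        // Launch backup attempts for straggling map tasks.
        conf.setMapSpeculativeExecution(true);
        // Reducers can be speculated too, at the cost of extra shuffle traffic.
        conf.setReduceSpeculativeExecution(true);
        return conf;
    }
}
```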
Pool commodity servers in a single hierarchical namespace. Designed for large files that are written once and read many times. The example here shows what happens with a replication factor of 3: each data block is present in at least 3 separate data nodes. A typical Hadoop node is eight cores with 16GB RAM and four 1TB SATA disks. The default block size is 64MB, though most folks now set it to 128MB.
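To make those defaults concrete, here is a hedged sketch using the HDFS FileSystem client; the create overload with explicit replication and block size is part of the stock API, while the path and values are illustrative:

```java
// Sketch: writing a file to HDFS with an explicit replication factor
// and block size, instead of relying on the cluster-wide defaults.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        short replication = 3;               // each block lands on 3 data nodes
        long blockSize = 128L * 1024 * 1024; // 128MB, the larger common setting
        int bufferSize = 4096;

        // Illustrative path; the file is written once, then read many times.
        FSDataOutputStream out = fs.create(
                new Path("/data/events.log"), true /* overwrite */,
                bufferSize, replication, blockSize);
        out.writeBytes("written once, read many times\n");
        out.close();
    }
}
```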
Differentiate between MapReduce the platform and MapReduce the programming model. The analogy is to the RDBMS, which executes the queries, and SQL, which is the language for the queries. MapReduce can run on top of HDFS or a selection of other storage systems. Intelligent scheduling algorithms handle locality, sharing, and resource optimization.
HBase: Low Latency Random-Access with per-row consistency for updates/inserts/deletes
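A hedged sketch of what that looks like from a client, using the classic (since deprecated) HTable API; the table, column family, and row key are hypothetical:

```java
// Sketch: single-row insert and low-latency point lookup in HBase,
// using the classic HTable client API. Table/column names are hypothetical.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseLookupExample {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(HBaseConfiguration.create(), "users");

        // Insert/update: atomic per row, no multi-row transaction needed.
        Put put = new Put(Bytes.toBytes("user42"));
        put.add(Bytes.toBytes("profile"), Bytes.toBytes("email"),
                Bytes.toBytes("user42@example.com"));
        table.put(put);

        // Point lookup by row key: this is the low-latency path.
        Result row = table.get(new Get(Bytes.toBytes("user42")));
        byte[] email = row.getValue(Bytes.toBytes("profile"), Bytes.toBytes("email"));
        System.out.println(Bytes.toString(email));

        table.close();
    }
}
```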
A sports car is refined, accelerates very fast, and has a lot of add-ons/features, but it is pricey on a per-byte basis and expensive to maintain. A cargo train is rough, missing a lot of "luxury", and slow to accelerate, but it can carry almost anything, and once it gets going it can move a lot of stuff very economically.

Hadoop:
A data grid operating system
Stores files (unstructured)
Stores 10s of petabytes
Processes 10s of PB/job
Weak consistency
Scan all blocks in all files
Queries & data processing
Batch response (>1 sec)

Relational Databases:
An ACID database system
Stores tables (schema)
Stores 100s of terabytes
Processes 10s of TB/query
Transactional consistency
Lookup rows using index
Mostly queries
Interactive response

Hadoop Myths:
Myth: Hadoop MapReduce requires rocket scientists. Reality: Hadoop offers the benefit of both worlds, the simplicity of SQL and the power of Java (or any other language for that matter).
Myth: Hadoop is not very efficient hardware-wise. Reality: Hadoop optimizes for scalability, stability, and flexibility rather than squeezing every tiny bit of hardware performance. It is more cost-efficient to throw more "pizza box" servers at a problem than to hire more engineers to manage, configure, and optimize the system, or to pay 10x the hardware cost in software.
Myth: Hadoop can't do quick random lookups. Reality: HBase enables low-latency key-value pair lookups (though no fast joins).
Myth: Hadoop doesn't support updates/inserts/deletes. Reality: not for multi-row transactions, but HBase enables transactions with row-level consistency semantics.
Myth: Hadoop isn't highly available. Reality: though Hadoop rarely loses data, it can suffer from downtime if the master NameNode goes down. This issue is currently being addressed, and there are HW/OS/VM solutions for it.
Myth: Hadoop can't be backed up/recovered quickly. Reality: HDFS, like other file systems, can copy files very quickly. It also has utilities to copy data between HDFS clusters.
Myth: Hadoop doesn't have security. Reality: Hadoop has Unix-style user/group permissions, and the community is working on improving its security model.
Myth: Hadoop can't talk to other systems. Reality: Hadoop can talk to BI tools using JDBC, to RDBMSes using Sqoop, and to other systems using FUSE, WebDAV & FTP.
The solution is to *augment* the current RDBMSes with a “smart” storage/processing system. The original event level data is kept in this smart storage layer and can be mined as needed. The aggregate data is kept in the RDBMSes for interactive reporting and analytics.
Hive Features:
A subset of SQL covering the most common statements
Agile data types: Array, Map, Struct, and JSON objects
User-Defined Functions and Aggregates
Regular expression support
MapReduce streaming support
JDBC/ODBC support
Partitions and Buckets (for performance optimization)
In the works: Indices, Columnar Storage, Views, Microstrategy compatibility, Explode/Collect
More details: http://wiki.apache.org/hadoop/Hive

Query: SELECT, FROM, WHERE, JOIN, GROUP BY, SORT BY, LIMIT, DISTINCT, UNION ALL
Join: LEFT, RIGHT, FULL, OUTER, INNER
DDL: CREATE TABLE, ALTER TABLE, DROP TABLE, DROP PARTITION, SHOW TABLES, SHOW PARTITIONS
DML: LOAD DATA INTO, FROM INSERT
Types: TINYINT, INT, BIGINT, BOOLEAN, DOUBLE, STRING, ARRAY, MAP, STRUCT, JSON OBJECT
Query: subqueries in FROM, User-Defined Functions, User-Defined Aggregates, Sampling (TABLESAMPLE)
Relational: IS NULL, IS NOT NULL, LIKE, REGEXP
Built-in aggregates: COUNT, MAX, MIN, AVG, SUM
Built-in functions: CAST, IF, REGEXP_REPLACE, …
Other: EXPLAIN, MAP, REDUCE, DISTRIBUTE BY
List and Map operators: array[i], map[k], struct.field
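Given the JDBC support above, here is a minimal sketch of querying Hive from Java; the driver class and URL follow the original pre-HiveServer2 driver, and the docs table echoes the word-count example later in the deck:

```java
// Sketch: running a HiveQL aggregate from Java via the Hive JDBC driver.
// Driver class/URL follow the original HiveServer; the table is illustrative.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:hive://localhost:10000/default", "", "");

        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(
                "SELECT word, COUNT(1) FROM docs GROUP BY word");
        while (rs.next()) {
            System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
        }
        conn.close();
    }
}
```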
Think: SELECT word, count(*) FROM documents GROUP BY word. Check out ParBASH: http://cloud-dev.blogspot.com/2009/06/introduction-to-parbash.html
The Data Node slave and the Task Tracker slave can, and should, share the same server instance to leverage data locality whenever possible. The NameNode and JobTracker are currently SPOFs, which can affect the availability of the system by around 15 minutes (no data loss though, so the system is reliable, but it can suffer from occasional downtime). That issue is currently being addressed by the Apache Hadoop community using ZooKeeper.
Hadoop: An Industry Perspective
Outline
What is Hadoop?
Overview of HDFS and MapReduce
How Hadoop Augments an RDBMS
Industry Business Needs:
Data Consolidation (Structured or Not)
Data Schema Agility (Evolve Schema Fast)
Query Language Flexibility (Data Engineering)
Data Economics (Store More for Longer)
Conclusion
What is Hadoop?
A scalable fault-tolerant distributed system for data storage and processing
Its scalability comes from the marriage of:
HDFS: Self-Healing High-Bandwidth Clustered Storage
MapReduce: Fault-Tolerant Distributed Processing
Operates on structured and complex data
A large and active ecosystem (many developers and additions like HBase, Hive, Pig, …)
Open source under the Apache License
http://wiki.apache.org/hadoop/
Hadoop History
2002-2004: Doug Cutting and Mike Cafarella started working on Nutch
2003-2004: Google publishes GFS and MapReduce papers
2004: Cutting adds DFS & MapReduce support to Nutch
2006: Yahoo! hires Cutting, Hadoop spins out of Nutch
2007: NY Times converts 4TB of archives over 100 EC2s
2008: Web-scale deployments at Y!, Facebook, Last.fm
April 2008: Yahoo! does fastest sort of a TB, 3.5 mins over 910 nodes
May 2009:
Yahoo! does fastest sort of a TB, 62 secs over 1460 nodes
Yahoo! sorts a PB in 16.25 hours over 3658 nodes
June 2009, Oct 2009: Hadoop Summit, Hadoop World
September 2009: Doug Cutting joins Cloudera
80% of this data will be unstructured (complex) data (IDC, 2008)
85% of all corporate information is in unstructured (complex) forms
Growth of unstructured data (61.7% CAGR) will far outpace that of transactional data

Data Consolidation: One Place For All
Complex Data: Documents, Web feeds, System logs, Online forums, SharePoint, Sensor data, EMB archives, Images/Video
Structured Data ("relational"): CRM, Financials, Logistics, Data Marts, Inventory, Sales records, HR records, Web Profiles
A single data system to enable processing across the universe of data types.
Data Agility: Schema on Read vs. Schema on Write
Schema-on-Write: the schema must be created before any data is loaded.
Schema-on-Read: the data is loaded raw, and a schema is applied only when it is read (as the input record readers do in MapReduce).

Hive: A SQL interpreter on top of MapReduce; it also includes a metastore mapping files to their schemas and associated SerDes. Hive also supports User-Defined Functions and pluggable MapReduce streaming functions in any language.

Hive Extensible Data Types
STRUCTS, along with the ARRAY and MAP types from the feature list above
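A minimal sketch of schema-on-read in practice, assuming a hypothetical tab-delimited log layout: the file was loaded into HDFS raw, and a mapper imposes field meanings only when the data is read:

```java
// Sketch: schema-on-read in a Hadoop mapper. The file was loaded raw;
// the field positions (timestamp, userId, url) are a hypothetical layout
// imposed only at read time, not enforced at load time.
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SchemaOnReadMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String[] fields = line.toString().split("\t");
        if (fields.length < 3) {
            return; // malformed row: skip it instead of failing a bulk load
        }
        String url = fields[2]; // interpret column 3 as the URL, at read time
        context.write(new Text(url), ONE);
    }
}
```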
If the ROB (return on byte: the value extracted from the data relative to the cost of keeping it) is < 1, the data will be buried in the tape wasteland; thus we need cheaper active storage.
(Figure: high ROB vs. low ROB.)
Case Studies: Hadoop World '09
VISA: Large-Scale Transaction Analysis
JP Morgan Chase: Data Processing for Financial Services
China Mobile: Data Mining Platform for Telecom Industry
Rackspace: Cross Data Center Log Processing
Booz Allen Hamilton: Protein Alignment using Hadoop
eHarmony: Matchmaking in the Hadoop Cloud
General Sentiment: Understanding Natural Language
Yahoo!: Social Graph Analysis
Visible Technologies: Real-Time Business Intelligence
Facebook: Rethinking the Data Warehouse with Hadoop and Hive
Slides and videos at http://www.cloudera.com/hadoop-world-nyc
Conclusion
Hadoop is a scalable distributed data processing system which enables:
Consolidation (Structured or Not)
Data Agility (Evolving Schemas)
Query Flexibility (Any Language)
Economical Storage (ROB > 1)
MapReduce: The Programming Model
SELECT word, COUNT(1) FROM docs GROUP BY word;
cat *.txt | mapper.pl | sort | reducer.pl > out.txt
(Figure: word-count dataflow. Each input split of (docid, text) records feeds a map task that emits (word, count) pairs; the shuffle sorts the pairs and routes each word to one reduce task, which sums the counts and writes one output file per reducer. For example, partial counts Be,5 + Be,12 + Be,7 + Be,6 from different maps arrive at the same reducer, which emits Be,30.)
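The figure maps directly onto Hadoop's Java API. Below is a sketch of the canonical word-count job, lightly trimmed and written against the org.apache.hadoop.mapreduce classes (Job.getInstance arrived after the 0.20 releases this deck dates from):

```java
// Sketch: the word-count job from the figure, in Hadoop's Java MapReduce API.
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Map: (docid, text) -> (word, 1) for every word in the line.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: (word, [counts]) -> (word, sum), e.g. Be: 5+12+7+6 = 30.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // pre-sum on the map side
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```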
Hadoop High-Level Architecture
Hadoop Client: contacts the Name Node for data, or the Job Tracker to submit jobs
Name Node: maintains the mapping of file blocks to data node slaves
Job Tracker: schedules jobs across task tracker slaves
Data Node: stores and serves blocks of data
Task Tracker: runs tasks (work units) within a job
(Data Nodes and Task Trackers share the same physical node.)
Economics of Hadoop Storage
Typical Hardware:
Two Quad-Core Nehalems
24GB RAM
12 × 1TB SATA disks (JBOD mode, no need for RAID)
1 Gigabit Ethernet card
Cost: $5K/node
Effective HDFS Space:
¼ reserved for temp shuffle space, which leaves 9TB/node
3-way replication leads to 3TB effective HDFS space/node
But assuming 7x compression, that becomes ~20TB/node
Effective cost per user TB: $250/TB
Other solutions cost in the range of $5K to $100K per user TB
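The arithmetic behind those numbers, as a small sketch (the 7x compression factor is the slide's own assumption and varies by workload):

```java
// Sketch: the cost-per-TB arithmetic from the slide above.
public class HadoopStorageEconomics {
    public static void main(String[] args) {
        double rawTb = 12 * 1.0;            // 12 x 1TB SATA disks
        double usableTb = rawTb * 0.75;     // 1/4 reserved for shuffle -> 9TB
        double hdfsTb = usableTb / 3.0;     // 3-way replication -> 3TB
        double userTb = hdfsTb * 7.0;       // assumed 7x compression -> ~21TB (~20TB)
        double costPerNode = 5000.0;        // $5K/node

        System.out.printf("Effective user space: %.0f TB/node%n", userTb);
        System.out.printf("Cost per user TB: $%.0f/TB%n", costPerNode / userTb);
        // ~$238/TB, i.e. roughly the $250/TB quoted on the slide at ~20TB/node
    }
}
```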