Hadoop: Distributed Data Processing
      Amr Awadallah
      Founder/CTO, Cloudera, Inc.
      ACM Data Mining SIG
      Thursday, January 25th, 2010


Outline

▪ Scaling for Large Data Processing
▪ What is Hadoop?
▪ HDFS and MapReduce
▪ Hadoop Ecosystem
▪ Hadoop vs RDBMSes
▪ Conclusion
Current Storage Systems Can’t Compute

[Diagram: Instrumentation feeds a collection tier that lands in a storage farm for unstructured data (20TB/day, mostly append). An ETL grid loads a subset into an RDBMS (200GB/day) that serves interactive apps. The filer heads are a bottleneck, and ad hoc queries & data mining over the raw data go unserved (“non-consumption”).]
The Solution: A Store-Compute Grid

[Diagram: Instrumentation and collection still feed a mostly-append store, but storage and computation now live in the same grid. ETL and aggregations run inside the grid and feed an RDBMS for interactive apps, while “batch” apps, ad hoc queries & data mining run directly against the grid.]
What is Hadoop?
▪ A scalable fault-tolerant grid operating system for data storage and processing
▪ Its scalability comes from the marriage of:
  ▪ HDFS: Self-Healing High-Bandwidth Clustered Storage
  ▪ MapReduce: Fault-Tolerant Distributed Processing
▪ Operates on unstructured and structured data
▪ A large and active ecosystem (many developers and additions like HBase, Hive, Pig, …)
▪ Open source under the friendly Apache License
▪ http://wiki.apache.org/hadoop/
Hadoop History
▪ 2002-2004: Doug Cutting and Mike Cafarella started working on Nutch
▪ 2003-2004: Google publishes GFS and MapReduce papers
▪ 2004: Cutting adds DFS & MapReduce support to Nutch
▪ 2006: Yahoo! hires Cutting, Hadoop spins out of Nutch
▪ 2007: NY Times converts 4TB of archives over 100 EC2s
▪ 2008: Web-scale deployments at Y!, Facebook, Last.fm
▪ April 2008: Yahoo does fastest sort of a TB, 3.5 mins over 910 nodes
▪ May 2009:
  ▪ Yahoo does fastest sort of a TB, 62 secs over 1460 nodes
  ▪ Yahoo sorts a PB in 16.25 hours over 3658 nodes
▪ June 2009, Oct 2009: Hadoop Summit (750), Hadoop World (500)
▪ September 2009: Doug Cutting joins Cloudera
Hadoop Design Axioms

1. System Shall Manage and Heal Itself
2. Performance Shall Scale Linearly
3. Compute Should Move to Data
4. Simple Core, Modular and Extensible
HDFS: Hadoop Distributed File System

▪ Block Size = 64MB
▪ Replication Factor = 3
▪ Cost/GB is a few ¢/month vs $/month
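To make those two numbers concrete, here is a small back-of-the-envelope sketch (not from the original deck; the 1TB example file is hypothetical) of how a file is cut into 64MB blocks and how 3-way replication multiplies the bytes physically stored:

# Rough illustration of the slide's HDFS parameters: 64MB blocks, 3x replication.
# The 1TB example file is an assumption for illustration only.
BLOCK_SIZE = 64 * 1024**2   # 64MB default block size
REPLICATION = 3             # default replication factor

def hdfs_footprint(file_bytes, block_size=BLOCK_SIZE, replication=REPLICATION):
    """Return (number of HDFS blocks, bytes physically stored across data nodes)."""
    blocks = -(-file_bytes // block_size)     # ceiling division
    return blocks, file_bytes * replication   # each byte is stored `replication` times

blocks, stored = hdfs_footprint(1024**4)      # a 1TB file
print(f"1TB file -> {blocks} blocks, {stored / 1024**4:.0f}TB stored with 3x replication")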
MapReduce: Distributed Processing




MapReduce Example for Word Count
SQL equivalent:   SELECT word, COUNT(1) FROM docs GROUP BY word;
Shell equivalent: cat *.txt | mapper.pl | sort | reducer.pl > out.txt

[Diagram: Each input split of (docid, text) records (e.g. “To Be Or Not To Be?”) goes to one of M map tasks, which emit (word, count) pairs such as Be,5 / Be,12 / Be,7 / Be,6. The shuffle sorts the pairs by word and routes them to reducers 1…R, which sum the counts per word (e.g. Be,30) and each write one sorted output file.]
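The shell pipeline above maps almost directly onto Hadoop Streaming. As a rough sketch (not from the original deck; the wordcount.py file name and the invocation below are assumptions), a Python mapper and reducer along those lines could look like:

#!/usr/bin/env python3
# Hypothetical Hadoop Streaming word count, mirroring the slide's
# "cat *.txt | mapper | sort | reducer" pipeline.
# Run as a mapper with "wordcount.py map" and as a reducer with "wordcount.py reduce".
import sys
from itertools import groupby

def mapper(stdin):
    # Emit one "word<TAB>1" line per word; the shuffle/sort groups equal words together.
    for line in stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer(stdin):
    # Input arrives sorted by word, so consecutive lines with the same key can be summed.
    pairs = (line.rstrip("\n").split("\t", 1) for line in stdin if line.strip())
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)

Locally this reproduces the slide's pipeline as: cat *.txt | python3 wordcount.py map | sort | python3 wordcount.py reduce. On a cluster the same two commands would be handed to the Hadoop Streaming jar via its -mapper and -reducer options.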
Hadoop High-Level Architecture

▪ Hadoop Client: contacts the Name Node for data, or the Job Tracker to submit jobs
▪ Name Node: maintains the mapping of file blocks to data node slaves
▪ Job Tracker: schedules jobs across task tracker slaves
▪ Data Node: stores and serves blocks of data
▪ Task Tracker: runs tasks (work units) within a job
▪ Data Nodes and Task Trackers share the same physical nodes
Apache Hadoop Ecosystem

[Diagram, layered bottom to top:]
▪ HDFS (Hadoop Distributed File System)
▪ MapReduce (Job Scheduling/Execution System), with HBase (key-value store) and the Streaming/Pipes APIs alongside
▪ Pig (Data Flow), Hive (SQL), and Sqoop
▪ ETL Tools, BI Reporting, and RDBMSes connecting through Pig, Hive, and Sqoop respectively
▪ ZooKeeper (Coordination) and Avro (Serialization) spanning the whole stack
Use The Right Tool For The Right Job

Hadoop: When to use?
• Affordable Storage/Compute
• Structured or Not (Agility)
• Resilient Auto Scalability

Relational Databases: When to use?
• Interactive Reporting (<1sec)
• Multistep Transactions
• Interoperability
Economics of Hadoop
▪ Typical Hardware:
  ▪ Two Quad Core Nehalems
  ▪ 24GB RAM
  ▪ 12 * 1TB SATA disks (JBOD mode, no need for RAID)
  ▪ 1 Gigabit Ethernet card
▪ Cost/node: $5K/node
▪ Effective HDFS Space:
  ▪ ¼ reserved for temp shuffle space, which leaves 9TB/node
  ▪ 3-way replication leads to 3TB effective HDFS space/node
  ▪ But assuming 7x compression that becomes ~20TB/node
▪ Effective cost per user TB: $250/TB
▪ Other solutions cost in the range of $5K to $100K per user TB
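As a cross-check, the $250/TB figure follows directly from the bullets above; here is a quick sketch of the same per-node arithmetic (all numbers taken from the slide, rounding ~21TB down to ~20TB as the slide does):

# Reproduces the slide's cost-per-user-TB arithmetic for one node.
raw_tb = 12                   # 12 x 1TB SATA disks
usable_tb = raw_tb * 3 / 4    # 1/4 reserved for temp shuffle space -> 9TB
hdfs_tb = usable_tb / 3       # 3-way replication -> 3TB effective HDFS space
user_tb = hdfs_tb * 7         # ~7x compression -> ~21TB, which the slide rounds to ~20TB
cost_per_node = 5000          # $5K/node
print(f"~{user_tb:.0f}TB user data/node, ~${cost_per_node / 20:.0f} per user TB")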
Sample Talks from Hadoop World ’09
▪ VISA: Large Scale Transaction Analysis
▪ JP Morgan Chase: Data Processing for Financial Services
▪ China Mobile: Data Mining Platform for Telecom Industry
▪ Rackspace: Cross Data Center Log Processing
▪ Booz Allen Hamilton: Protein Alignment using Hadoop
▪ eHarmony: Matchmaking in the Hadoop Cloud
▪ General Sentiment: Understanding Natural Language
▪ Yahoo!: Social Graph Analysis
▪ Visible Technologies: Real-Time Business Intelligence
▪ Facebook: Rethinking the Data Warehouse with Hadoop and Hive

Slides and Videos at http://www.cloudera.com/hadoop-world-nyc
Cloudera Desktop




Conclusion

Hadoop is a data grid operating system which provides an economically scalable solution for storing and processing large amounts of unstructured or structured data over long periods of time.
Contact Information

Amr Awadallah
CTO, Cloudera Inc.
aaa@cloudera.com
http://twitter.com/awadallah

Online Training Videos and Info:
http://cloudera.com/hadoop-training
http://cloudera.com/blog
http://twitter.com/cloudera
(c) 2008 Cloudera, Inc. or its licensors. "Cloudera" is a registered trademark of Cloudera, Inc. All rights reserved. 1.0

Hadoop: Distributed data processing

  • 1. Hadoop: Distributed Data Processing Amr Awadallah Founder/CTO, Cloudera, Inc. ACM Data Mining SIG Thursday, January 25th, 2010 Wednesday, January 27, 2010
  • 2. Outline ▪Scaling for Large Data Processing ▪What is Hadoop? ▪HDFS and MapReduce ▪Hadoop Ecosystem ▪Hadoop vs RDBMSes ▪Conclusion Amr Awadallah, Cloudera Inc 2 Wednesday, January 27, 2010
  • 3. Current Storage Systems Can’t Compute Amr Awadallah, Cloudera Inc 3 Wednesday, January 27, 2010
  • 4. Current Storage Systems Can’t Compute Collection Instrumentation Amr Awadallah, Cloudera Inc 3 Wednesday, January 27, 2010
  • 5. Current Storage Systems Can’t Compute Storage Farm for Unstructured Data (20TB/day) Mostly Append Collection Instrumentation Amr Awadallah, Cloudera Inc 3 Wednesday, January 27, 2010
  • 6. Current Storage Systems Can’t Compute Interactive Apps RDBMS (200GB/day) ETL Grid Storage Farm for Unstructured Data (20TB/day) Mostly Append Collection Instrumentation Amr Awadallah, Cloudera Inc 3 Wednesday, January 27, 2010
  • 7. Current Storage Systems Can’t Compute Interactive Apps RDBMS (200GB/day) ETL Grid Filer heads are a bottleneck Storage Farm for Unstructured Data (20TB/day) Mostly Append Collection Instrumentation Amr Awadallah, Cloudera Inc 3 Wednesday, January 27, 2010
  • 8. Current Storage Systems Can’t Compute Interactive Apps Ad hoc Queries & Data Mining RDBMS (200GB/day) ETL Grid Non-Consumption Filer heads are a bottleneck Storage Farm for Unstructured Data (20TB/day) Mostly Append Collection Instrumentation Amr Awadallah, Cloudera Inc 3 Wednesday, January 27, 2010
  • 9. The Solution: A Store-Compute Grid Amr Awadallah, Cloudera Inc 4 Wednesday, January 27, 2010
  • 10. The Solution: A Store-Compute Grid Storage + Computation Mostly Append Collection Instrumentation Amr Awadallah, Cloudera Inc 4 Wednesday, January 27, 2010
  • 11. The Solution: A Store-Compute Grid Interactive Apps RDBMS ETL and Aggregations Storage + Computation Mostly Append Collection Instrumentation Amr Awadallah, Cloudera Inc 4 Wednesday, January 27, 2010
  • 12. The Solution: A Store-Compute Grid Interactive Apps “Batch” Apps RDBMS Ad hoc Queries ETL and & Data Mining Aggregations Storage + Computation Mostly Append Collection Instrumentation Amr Awadallah, Cloudera Inc 4 Wednesday, January 27, 2010
•	13–19. What is Hadoop?
▪ A scalable fault-tolerant grid operating system for data storage and processing
▪ Its scalability comes from the marriage of:
  ▪ HDFS: Self-Healing High-Bandwidth Clustered Storage
  ▪ MapReduce: Fault-Tolerant Distributed Processing
▪ Operates on unstructured and structured data
▪ A large and active ecosystem (many developers and additions like HBase, Hive, Pig, …)
▪ Open source under the friendly Apache License
▪ http://wiki.apache.org/hadoop/
•	20–30. Hadoop History
▪ 2002–2004: Doug Cutting and Mike Cafarella started working on Nutch
▪ 2003–2004: Google publishes GFS and MapReduce papers
▪ 2004: Cutting adds DFS & MapReduce support to Nutch
▪ 2006: Yahoo! hires Cutting, Hadoop spins out of Nutch
▪ 2007: NY Times converts 4TB of archives over 100 EC2s
▪ 2008: Web-scale deployments at Y!, Facebook, Last.fm
▪ April 2008: Yahoo does fastest sort of a TB, 3.5 mins over 910 nodes
▪ May 2009:
  ▪ Yahoo does fastest sort of a TB, 62 secs over 1460 nodes
  ▪ Yahoo sorts a PB in 16.25 hours over 3658 nodes
▪ June 2009, Oct 2009: Hadoop Summit (750), Hadoop World (500)
▪ September 2009: Doug Cutting joins Cloudera
•	31–35. Hadoop Design Axioms
1. System Shall Manage and Heal Itself
2. Performance Shall Scale Linearly
3. Compute Should Move to Data
4. Simple Core, Modular and Extensible
•	36–37. HDFS: Hadoop Distributed File System — Block Size = 64MB, Replication Factor = 3, Cost/GB is a few ¢/month vs $/month.
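The defaults above make the storage math easy to check. Below is a minimal back-of-the-envelope sketch (not from the deck; the function name and the 1 TB example file are illustrative) showing how the 64MB block size and 3-way replication turn a file’s logical size into a block count and a raw disk footprint:

```python
import math

BLOCK_SIZE = 64 * 1024**2      # 64 MB default block size (per the slide)
REPLICATION = 3                # default replication factor (per the slide)

def hdfs_footprint(file_size_bytes):
    """Return (block_count, raw_bytes_on_disk) for one file under these defaults."""
    blocks = math.ceil(file_size_bytes / BLOCK_SIZE)   # the file is split into 64 MB blocks
    raw = file_size_bytes * REPLICATION                # each block is stored on 3 Data Nodes
    return blocks, raw

blocks, raw = hdfs_footprint(1 * 1024**4)              # e.g. a 1 TB file (illustrative)
print(f"1 TB file -> {blocks} blocks, {raw / 1024**4:.0f} TB of raw disk")
```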
•	38–39. MapReduce: Distributed Processing (diagram slides).
•	40–43. MapReduce Example for Word Count
SQL equivalent: SELECT word, COUNT(1) FROM docs GROUP BY word;
Unix pipeline equivalent: cat *.txt | mapper.pl | sort | reducer.pl > out.txt
Diagram: the input (e.g. “To Be Or Not To Be?”) is divided into splits 1…N of (docid, text) records; Map tasks 1…M emit (word, count) pairs (e.g. Be, 5 / Be, 12 / Be, 7 / Be, 6); the shuffle sorts and routes pairs by word to Reduce tasks 1…R, each of which sums the counts for its words and writes an output file of (sorted word, sum of counts) records (e.g. Be, 30).
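The Unix pipeline above is the mental model: map, sort, reduce. The sketch below (not from the deck) re-implements the roles of mapper.pl and reducer.pl as plain Python generators and runs them locally on two sample lines; the sample documents and function names are illustrative only.

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Map phase: emit a (word, 1) pair for every word of every input record."""
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reducer(sorted_pairs):
    """Reduce phase: sum the counts of each word; input must arrive sorted by word."""
    for word, group in groupby(sorted_pairs, key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

docs = ["To Be Or Not To Be?", "To be is to do"]   # stand-in for the input splits
shuffled = sorted(mapper(docs))                    # plays the role of 'sort' / the shuffle
for word, total in reducer(shuffled):
    print(f"{word}\t{total}")
```

On a real cluster the same per-line logic would read stdin and write stdout in two small scripts submitted through Hadoop Streaming, with HDFS supplying the splits and MapReduce performing the shuffle between the two phases.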
•	44. Hadoop High-Level Architecture
▪ Hadoop Client: contacts the Name Node for data, or the Job Tracker to submit jobs
▪ Name Node: maintains the mapping of file blocks to Data Node slaves
▪ Job Tracker: schedules jobs across Task Tracker slaves
▪ Data Node: stores and serves blocks of data
▪ Task Tracker: runs tasks (work units) within a job
▪ Data Nodes and Task Trackers share the same physical nodes
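To make the division of labor concrete, here is a toy sketch (illustrative only, not Hadoop’s actual scheduler code; all names and data structures are made up) of the “compute moves to data” behavior: the Job Tracker consults the Name Node’s block-to-Data-Node mapping and prefers a Task Tracker that already holds a local replica of the block a map task will read.

```python
block_locations = {                      # Name Node's view: block id -> Data Nodes with a replica
    "blk_001": ["node1", "node3", "node5"],
    "blk_002": ["node2", "node4", "node5"],
    "blk_003": ["node1", "node2", "node5"],
}
idle_trackers = ["node3", "node4"]       # Task Trackers currently asking the Job Tracker for work

def assign_map_task(block_id):
    """Prefer an idle node that already stores the block (data-local); else ship to any idle node."""
    local = [n for n in idle_trackers if n in block_locations[block_id]]
    chosen = (local or idle_trackers)[0]
    return chosen, "data-local" if local else "remote"

for blk in sorted(block_locations):
    node, locality = assign_map_task(blk)
    print(f"{blk} -> run map on {node} ({locality})")
```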
•	45–48. Apache Hadoop Ecosystem (layered diagram)
▪ HDFS (Hadoop Distributed File System) at the base
▪ MapReduce (Job Scheduling/Execution System) with Streaming/Pipes APIs
▪ HBase (key-value store) alongside, on top of HDFS
▪ ZooKeeper (Coordination) and Avro (Serialization) span the stack
▪ Pig (Data Flow), Hive (SQL), and Sqoop on top
▪ ETL Tools, BI Reporting, and RDBMSes integrate above
•	49–51. Use The Right Tool For The Right Job
Hadoop — when to use: ▪ Affordable Storage/Compute ▪ Structured or Not (Agility) ▪ Resilient Auto Scalability
Relational Databases — when to use: ▪ Interactive Reporting (<1sec) ▪ Multistep Transactions ▪ Interoperability
•	52–57. Economics of Hadoop
▪ Typical Hardware: Two Quad Core Nehalems, 24GB RAM, 12 × 1TB SATA disks (JBOD mode, no need for RAID), 1 Gigabit Ethernet card
▪ Cost/node: $5K/node
▪ Effective HDFS Space: ¼ reserved for temp shuffle space, which leaves 9TB/node; 3-way replication leads to 3TB effective HDFS space/node; but assuming 7x compression that becomes ~20TB/node
▪ Effective Cost per user TB: $250/TB
▪ Other solutions cost in the range of $5K to $100K per user TB
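The $250/TB figure follows directly from the bullets above; the short sketch below simply reproduces that arithmetic, assuming the quoted $5K/node, ¼ shuffle reservation, 3-way replication, and ~7x compression.

```python
raw_tb_per_node = 12.0                 # 12 x 1 TB SATA disks
usable_tb = raw_tb_per_node * 0.75     # 1/4 reserved for temp/shuffle space -> 9 TB
hdfs_tb = usable_tb / 3                # 3-way replication -> 3 TB effective HDFS space
user_tb = hdfs_tb * 7                  # ~7x compression -> ~21 TB of user data per node
cost_per_user_tb = 5000 / user_tb      # $5K per node
print(f"~{user_tb:.0f} TB of user data/node, ~${cost_per_user_tb:.0f} per user TB")
# prints ~21 TB/node and ~$238 per user TB, i.e. roughly the $250/TB quoted on the slide
```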
•	58. Sample Talks from Hadoop World ’09
▪ VISA: Large Scale Transaction Analysis
▪ JP Morgan Chase: Data Processing for Financial Services
▪ China Mobile: Data Mining Platform for Telecom Industry
▪ Rackspace: Cross Data Center Log Processing
▪ Booz Allen Hamilton: Protein Alignment using Hadoop
▪ eHarmony: Matchmaking in the Hadoop Cloud
▪ General Sentiment: Understanding Natural Language
▪ Yahoo!: Social Graph Analysis
▪ Visible Technologies: Real-Time Business Intelligence
▪ Facebook: Rethinking the Data Warehouse with Hadoop and Hive
Slides and videos at http://www.cloudera.com/hadoop-world-nyc
•	59. Cloudera Desktop (screenshot slide)
•	60–61. Conclusion: Hadoop is a data grid operating system which provides an economically scalable solution for storing and processing large amounts of unstructured or structured data over long periods of time.
•	62. Contact Information: Amr Awadallah, CTO, Cloudera Inc. — aaa@cloudera.com — http://twitter.com/awadallah. Online training videos and info: http://cloudera.com/hadoop-training, http://cloudera.com/blog, http://twitter.com/cloudera
•	63. (c) 2008 Cloudera, Inc. or its licensors. “Cloudera” is a registered trademark of Cloudera, Inc. All rights reserved. 1.0