Hadoop Installation, Configuration, and MapReduce Program - Praveen Kumar Donta
This presentation contains a brief description of big data, along with Hadoop installation and configuration and a MapReduce word-count program with its explanation.
Democratizing Data Quality Through a Centralized Platform - Databricks
Bad data leads to bad decisions and broken customer experiences. Organizations depend on complete and accurate data to power their business, maintain efficiency, and uphold customer trust. With thousands of datasets and pipelines running, how do we ensure that all data meets quality standards, and that expectations are clear between producers and consumers? Investing in shared, flexible components and practices for monitoring data health is crucial for a complex data organization to rapidly and effectively scale.
At Zillow, we built a centralized platform to meet our data quality needs across stakeholders. The platform is accessible to engineers, scientists, and analysts, and seamlessly integrates with existing data pipelines and data discovery tools. In this presentation, we will provide an overview of our platform’s capabilities, including:
Giving producers and consumers the ability to define and view data quality expectations using a self-service onboarding portal
Performing data quality validations using libraries built to work with Spark
Dynamically generating pipelines that can be abstracted away from users
Flagging data that doesn’t meet quality standards at the earliest stage and giving producers the opportunity to resolve issues before use by downstream consumers
Exposing data quality metrics alongside each dataset to provide producers and consumers with a comprehensive picture of health over time
Building the Data Lake with Azure Data Factory and Data Lake Analytics - Khalid Salama
In essence, a data lake is a commodity distributed file system that acts as a repository to hold raw data file extracts of all the enterprise source systems, so that it can serve the data management and analytics needs of the business. A data lake system provides the means to ingest data, perform scalable big data processing, and serve information, in addition to managing, monitoring, and securing the environment. In these slides, we discuss building data lakes using Azure Data Factory and Data Lake Analytics. We delve into the architecture of the data lake and explore its various components. We also describe the various data ingestion scenarios and considerations. We introduce the Azure Data Lake Store, then discuss how to build an Azure Data Factory pipeline to ingest data into the lake. After that, we move into big data processing using Data Lake Analytics, and we delve into U-SQL.
Azure Databricks is a fast, easy, and collaborative Apache Spark-based analytics platform optimized for Azure. Designed in collaboration with the founders of Apache Spark, Azure Databricks combines the best of Databricks and Azure to help customers accelerate innovation with one-click setup, streamlined workflows, and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts. As an Azure service, customers automatically benefit from the native integration with other Azure services such as Power BI, SQL Data Warehouse, and Cosmos DB, as well as from enterprise-grade Azure security, including Active Directory integration, compliance, and enterprise-grade SLAs.
SF Big Analytics 20190612: Building highly efficient data lakes using Apache ... - Chester Chen
Building highly efficient data lakes using Apache Hudi (Incubating)
Even with the exponential growth in data volumes, ingesting, storing, and managing big data remains unstandardized and inefficient. Data lakes are a common architectural pattern for organizing big data and democratizing access across the organization. In this talk, we will discuss different aspects of building honest data lake architectures, pinpointing technical challenges and areas of inefficiency. We will then re-architect the data lake using Apache Hudi (Incubating), which provides streaming primitives right on top of big data. We will show how upserts and incremental change streams provided by Hudi help optimize data ingestion and ETL processing. Further, Apache Hudi manages the growth and file sizes of the resulting data lake using purely open-source file formats, while also providing optimized query performance and file-system listing. We will also provide hands-on tools and guides for trying this out on your own data lake.
Speaker: Vinoth Chandar (Uber)
Vinoth is a Technical Lead on the Uber Data Infrastructure team.
Designing Apache Hudi for Incremental Processing With Vinoth Chandar and Etha... - HostedbyConfluent
Designing Apache Hudi for Incremental Processing With Vinoth Chandar and Ethan Guo | Current 2022
Back in 2016, Apache Hudi brought transactions and change capture to data lakes, in what is today referred to as the Lakehouse architecture. In this session, we first introduce Apache Hudi and the key technology gaps it fills in the modern data architecture. Bridging traditional data lakes and warehouses, Hudi helps realize the Lakehouse vision by bringing transactions, optimized table metadata, and powerful storage layout optimizations to data lakes, moving them closer to the cloud warehouses of today. Viewed through a data engineering lens, Hudi also plays a key unifying role between the batch and stream processing worlds by acting as a columnar, serverless "state store" for batch jobs, ushering in what we call the incremental processing model, where batch jobs can consume new data and update/delete intermediate results in a Hudi table, instead of re-computing and re-writing the entire output like old-school big batch jobs.
The rest of the talk focuses on a deep dive into some of the time-tested design choices and tradeoffs in Hudi that help power some of the largest transactional data lakes on the planet today. We will start with a tour of the storage format design, including data and metadata layouts and, of course, Hudi's timeline, an event log that is central to implementing ACID transactions and concurrency control. We will delve deeper into the practical concurrency control pitfalls in data lakes, and show how Hudi's hybrid approach, combining MVCC with optimistic concurrency control, lowers contention and unlocks minute-level near real-time commits to Hudi tables. We will conclude with code examples that showcase Hudi's rich set of table services that perform vital table management, such as cleaning older file versions, compaction of delta logs into base files, dynamic re-clustering for faster query performance, and the more recently introduced indexing service that maintains Hudi's multi-modal indexing capabilities.
Data Lakehouse Symposium | Day 1 | Part 2 - Databricks
The world of data architecture began with applications. Next came data warehouses. Then text was organized into a data warehouse.
Then one day the world discovered a whole new kind of data that was being generated by organizations. The world found that machines generated data that could be transformed into valuable insights. This was the origin of what is today called the data lakehouse. The evolution of data architecture continues today.
Come listen to industry experts describe this transformation of ordinary data into a data architecture that is invaluable to business. Simply put, organizations that take data architecture seriously are going to be at the forefront of business tomorrow.
This is an educational event.
Several of the authors of the book Building the Data Lakehouse will be presenting at this symposium.
Using Spark Streaming and NiFi for the next generation of ETL in the enterprise - DataWorks Summit
On paper, combining Apache NiFi, Kafka, and Spark Streaming provides a compelling architecture option for building your next-generation ETL data pipeline in near real time. But what does it look like to deploy and operationalize this in an enterprise production environment?
The newer Spark Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing with elegant code samples, but is that the whole story? This session will cover the Royal Bank of Canada’s (RBC) journey of moving away from traditional ETL batch processing with Teradata towards using the Hadoop ecosystem for ingesting data. One of the first systems to leverage this new approach was the Event Standardization Service (ESS). This service provides a centralized “client event” ingestion point for the bank’s internal systems through either a web service or a daily text-file batch feed. ESS allows downstream reporting applications and end users to query these centralized events.
We discuss the drivers and expected benefits of changing the existing event processing. In presenting the integrated solution, we will explore the key components of using NiFi, Kafka, and Spark, then share the good, the bad, and the ugly when trying to adopt these technologies into the enterprise. This session is targeted toward architects and other senior IT staff looking to continue their adoption of open source technology and modernize ingest/ETL processing. Attendees will take away lessons learned and experience in deploying these technologies to make their journey easier.
Speakers
Darryl Sutton, T4G, Principal Consultant
Kenneth Poon, RBC, Director, Data Engineering
Part 1: Lambda Architectures: Simplified by Apache Kudu - Cloudera, Inc.
3 Things to Learn About:
* The concept of lambda architectures
* The Hadoop ecosystem components involved in lambda architectures
* The advantages and disadvantages of lambda architectures
Delta Lake delivers reliability, security and performance to data lakes. Join this session to learn how customers have achieved 48x faster data processing, leading to 50% faster time to insight after implementing Delta Lake. You’ll also learn how Delta Lake provides the perfect foundation for a cost-effective, highly scalable lakehouse architecture.
This Hadoop tutorial on a MapReduce example ( MapReduce Tutorial Blog Series: https://goo.gl/w0on2G ) will help you understand how to write a MapReduce program in Java. You will also get to see multiple MapReduce examples on analytics and testing.
Check our complete Hadoop playlist here: https://goo.gl/ExJdZs
Below are the topics covered in this tutorial:
1) MapReduce Way
2) Classes and Packages in MapReduce
3) Explanation of a Complete MapReduce Program
4) MapReduce Examples on Analytics
5) MapReduce Example on Testing - MRUnit
Hadoop, Pig, and Twitter (NoSQL East 2009) - Kevin Weil
A talk on the use of Hadoop and Pig inside Twitter, focusing on the flexibility and simplicity of Pig, and the benefits of that for solving real-world big data problems.
Simplify CDC Pipeline with Spark Streaming SQL and Delta Lake - Databricks
Change Data Capture (CDC) is a typical use case in real-time data warehousing. It tracks the change log (binlog) of a relational (OLTP) database and replays these changes promptly to an external store, such as Delta or Kudu, for real-time OLAP. Implementing a robust CDC streaming pipeline involves many concerns, such as how to ensure data accuracy, how to handle schema changes in the OLTP source, and whether it is easy to support a variety of databases with little code.
Pig Tutorial | Twitter Case Study | Apache Pig Script and Commands | Edureka - Edureka!
This Edureka Pig Tutorial ( Pig Tutorial Blog Series: https://goo.gl/KPE94k ) will help you understand the concepts of Apache Pig in depth.
Check our complete Hadoop playlist here: https://goo.gl/ExJdZs
Below are the topics covered in this Pig Tutorial:
1) Entry of Apache Pig
2) Pig vs MapReduce
3) Twitter Case Study on Apache Pig
4) Apache Pig Architecture
5) Pig Components
6) Pig Data Model
7) Running Pig Commands and Pig Scripts (Log Analysis)
Spark Hadoop Tutorial | Spark Hadoop Example on NBA | Apache Spark Training |... - Edureka!
This Edureka Spark Hadoop Tutorial will help you understand how to use Spark and Hadoop together. This Spark Hadoop tutorial is ideal for both beginners as well as professionals who want to learn or brush up their Apache Spark concepts. Below are the topics covered in this tutorial:
1) Spark Overview
2) Hadoop Overview
3) Spark vs Hadoop
4) Why Spark Hadoop?
5) Using Hadoop With Spark
6) Use Case - Sports Analytics (NBA)
Streaming Data Lakes using Kafka Connect + Apache Hudi | Vinoth Chandar, Apac... - HostedbyConfluent
Apache Hudi is a data lake platform that provides streaming primitives (upserts/deletes/change streams) on top of data lake storage. Hudi powers very large data lakes at Uber, Robinhood and other companies, while being pre-installed on four major cloud platforms.
Hudi supports exactly-once, near real-time data ingestion from Apache Kafka to cloud storage, and is typically used in place of an S3/HDFS sink connector to gain transactions and mutability. While this approach is scalable and battle-tested, it can only ingest data in mini-batches, leading to lower data freshness. In this talk, we introduce a Kafka Connect Sink Connector for Apache Hudi, which writes data straight into Hudi's log format, making the data immediately queryable, while Hudi's table services like indexing, compaction, and clustering work behind the scenes to further re-organize for better query performance.
Embarking on building a modern data warehouse in the cloud can be an overwhelming experience due to the sheer number of products that can be used, especially when the use cases for many products overlap with others. In this talk I will cover the use cases of many of the Microsoft products that you can use when building a modern data warehouse, broken down into four areas: ingest, store, prep, and model & serve. It’s a complicated story that I will try to simplify, giving blunt opinions on when to use which products and the pros/cons of each.
Agile Data: Building Hadoop Analytics Applications - DataWorks Summit
Mining data requires a deep investment in people and time. How can you be sure you’re building the right models? What tools help you connect with the customer’s needs? With this hands-on presentation, you’ll learn a flexible toolset and methodology for building effective analytics applications. Agile Data (the book) shows you how to create an environment for exploring data, using lightweight tools such as Python, Apache Pig, and the D3.js (Data-Driven Documents) JavaScript library. You’ll learn an iterative approach that allows you to quickly change the kind of analysis you’re doing, as you discover what the data is telling you. All the example code in this book is available as working web applications. We will cover how to:
* Build an application to mine your own email inbox
* Use different data structures and algorithms to extract multiple features from a single dataset, and learn how different perspectives can yield insight
* Rapidly boot your applications as simple front-ends to a document store
* Add features driven by descriptive and inferential statistics, machine learning, and data visualization
* Gather usage data and talk to real users to help guide your data-driven exploration
Hadoop meets Agile! - An Agile Big Data Model - Uwe Printz
Big Data projects are a struggle, not only on the technical side but also on the organizational side. In this talk the author shares his experience and opinions from almost 5 years of Big Data projects and develops an Agile Big Data Model which reflects his ideas on how Big Data projects can be successful, even in large companies.
Talk held at the crossover meetup of the "Agile Stammtisch Rhein-Main" and the "Hadoop & Spark User Group Rhein-Main" at codecentric AG on 31.01.2017.
This in-depth presentation covers market trends and risks related to network security and big data analytics. The presentation was given by Matan Trogan at Cybertech Singapore.
There are many types of databases and data analysis tools from which to choose today. Should you use a relational database? How about a key-value store? Maybe a document database? Or is a graph database the right fit for your project? What about polyglot persistence? Help! Applying principles from Domain-Driven Design such as strategic design and bounded contexts, this presentation will help you choose and apply the right data layer for your application's model or models.
Application Developer Predictions 2017 - It's All About Cognitive - IBM Watson
Watch the video recording: https://youtu.be/NmlM1SdYFFo
You’ve heard the buzz around cognitive technologies. You may have even experimented with them on your own, but you’re not sure how they fit in your app development toolbox. Sound familiar?
Don’t let another year pass without unlocking the true potential in your unstructured text, speech, images, and more. In this webinar, James Governor of Redmonk and Marcus Boone of Watson will share their predictions for what will be hot (and not) when it comes to application development in 2017. From massive data growth to conversational commerce and beyond, learn why cognitive app development is poised to accelerate in 2017 and how Watson cognitive APIs can help you successfully build your own breakthrough cognitive app.
People Analytics: A Cognitive Approach to HR: How Capgemini Leverages Watson ... - Capgemini
People are the number one factor in a company's success, and it is becoming increasingly difficult to recruit and retain a highly skilled workforce. Today, the speed at which skills change on the job, and a lack of context for new skills, make manually managing competencies and job mappings difficult and expensive. Meanwhile, the need for employees with new skills keeps growing, and social networks are making it easier for companies to poach skilled workers. Learn how organizations can better adapt to this evolving business landscape.
A Practical Guide to Domain Driven Design: Presentation Slides - thinkddd
Tonight I presented on Domain Driven Design to the Alt.Net group in Sydney at the invite of Richard Banks.
As a follow-up, attached are the slides I used; feel free to distribute and use them under the Creative Commons Licence.
Purpose: The slides provide an overview of the Cognitive Computing trend for IBM clients and external stakeholders
Content: Summary information about the Cognitive Computing trend is provided along with many links to additional resources.
How To Use This Report: This report is best read/studied and used as a learning document. You may want to view the slides in slideshow mode so you can easily follow the links
Available on Slideshare: This presentation (and other HorizonWatch Trend Reports for 2015) will be available publically on Slideshare at http://www.slideshare.net/horizonwatching
Please Note: This report is based on internal IBM analysis and is not meant to be a statement of direction by IBM nor is IBM committing to any particular technology or solution.
The goal of this presentation is to demonstrate an architecture for a Big Data solution using open-source components; it was presented at TDC 2014 in Porto Alegre.
A creative take on the Hadoop ecosystem.
In today's capitalist world, M. Bison, owner of the world's largest e-commerce company, Shadaloo, decides to analyze the profile of all of his customers; not just looking at the usual BI data, but also analyzing:
- Data from ALL legacy systems
- Navigation data
- Customer service (SAC) and social media data.
That way he could:
- Build personalized offer mechanisms
- Retain customers who file complaints with customer service
- Identify the relationship between customer service complaints and social media.
- Analyze navigation flows and offer personalized navigation per customer type
A text about Big Data taken from the blog bigdatabrazil.blogspot.com, with basic information on Hadoop, MapReduce, HDFS, and Hive. It includes recommended books and links that cover the subject in more depth.
This presentation was given at UFSM's Academic Week 2014 (SAINF 2014). It explains how Big Data works, the role Apache Hadoop plays, and how data is collected and analyzed by the big data-mining companies.
A talk on Big Data and the Hadoop ecosystem, covering its concepts and tools, including learning paths and some certifications, delivered online on the Coders In Rio channel: https://www.youtube.com/watch?v=-pCwSkNoRY4&t=1s
Material for a seminar on NoSQL, presented for the Databases II course at Universidade de Vila Velha.
Presentation: https://www.slideshare.net/lorran33/seminrio-nosql
Students: Iago Binow, Lorran Pegoretti, Luiz Marcon and Pedro Malta
Universidade de Vila Velha.
Data Science, Big Data, and Analytics are terms we hear constantly these days. More than buzzwords, they are guiding how companies of all sizes think about and evolve their business models.
Let's demystify some of these concepts and show how we can start applying some of these techniques in our own projects. And, since Python is one of the most widely used languages for data analysis, we will see how it can help us on this journey.
46. WHAT IS OUR CHALLENGE?
Data held in multiple systems
Often stored in different formats
And, ideally, having a single source of truth from which to derive the data
47. Think of a data mart as a store of bottled water: cleansed, packaged, and arranged for easy consumption; the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users can come to examine it, dive in, or take samples.
James Dixon, CTO of Pentaho
48. An Enterprise Data Lake is an immutable data store of largely un-processed “raw” data, acting as a source for other processing streams but also made directly available to a significant number of internal, technical consumers using some efficient processing engine. Examples include HDFS or HBase within a Hadoop, Spark or Storm processing framework. We can contrast this with a typical system that collects raw data into some highly restricted space that is only made available to these consumers as the end result of a highly controlled ETL process.
ThoughtWorks Tech Radar
49. PROPERTIES OF DATA LAKES
Data ingestion should be push-based: data is pushed into the system rather than ingested periodically through batch processing
Ingested data should be stored in its purest form
The solution should scale horizontally, in terms of both storage and processing capacity
It does not serve the end user, but rather technical users
50. GOALS OF THE DATA LAKE
Reduce the cost of ingesting new types of data
Shorten the time it takes for updates in the operational systems to reach the analytical systems
Allow processing of data volumes far larger than traditional DW systems can handle
51. GOALS OF THE DATA LAKE
Eliminate bottlenecks caused by a shortage of developers specialized in ETL or by excessive up-front design of the data model
Empower developers to build their own data processing pipelines in an agile way, whenever and however needed, within reasonable limits
61. DATA ||-ISM (DATA PARALLELISM)
Focus on distributing the data across different parallel compute nodes
Each processor performs the same task on different slices of the distributed data
Emphasizes the distributed nature of the data, as opposed to the processing
62. EXAMPLE
define foo(array d)
    if CPU = "a"
        lower_limit := 1
        upper_limit := round(d.length / 2)
    else if CPU = "b"
        lower_limit := round(d.length / 2) + 1
        upper_limit := d.length
    for i from lower_limit to upper_limit by 1
        do_something_with(d[i])
end
63. EXAMPLE (the same code as above, revisited)
The code is coupled to the machine's number of CPUs
You have to worry about how to split the data across the different compute nodes
You have to explicitly worry about how to accumulate and consolidate the final output from the parallel computations
64. AS A DEVELOPER…
I want to write my code in such a way that it can run in parallel
I don't want to write code based on how many CPUs/machines I have available in the data center
I don't want to worry about how the input data should be split for parallel execution
I don't want to worry about how the final output should be accumulated and consolidated from the units running in parallel
(a sketch follows below)
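As an illustration of these wishes granted, the same loop from the pseudocode above can be expressed with Spark, which this deck introduces a few slides ahead. This is a hypothetical sketch: sc stands for an assumed SparkContext (its creation is shown later), and do_something_with is the placeholder function from the example.

// No CPU coupling, no manual data splitting, and no manual consolidation:
// the framework partitions d across the workers and applies the function
sc.parallelize(d).foreach(do_something_with)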
69. HADOOP
A framework for distributed storage and computation, aimed at large-scale data processing
Imposes no restrictions on the formats of the data being processed
An Apache Software Foundation project
Implemented in Java; supported on all *nix platforms and on Windows
Goal: linearly scalable computation/storage using commodity hardware
71. HDFS
Distributed file system
Each file may be spread across multiple nodes
Clients can access files from any node, as if they were local
Fault tolerance and high availability
APIs
Java/Scala
The HDFS shell supports several commands
A web interface for browsing the file system
(an API sketch follows below)
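To make the Java/Scala API concrete, here is a minimal hypothetical sketch using Hadoop's FileSystem API; the NameNode address is a placeholder assumption (in a real cluster it normally comes from core-site.xml).

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object HdfsList {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // Placeholder address for the NameNode; adjust for your cluster
    conf.set("fs.defaultFS", "hdfs://namenode:8020")
    val fs = FileSystem.get(conf)
    // Prints roughly what the HDFS shell's "hdfs dfs -ls /" would list
    fs.listStatus(new Path("/")).foreach(status => println(status.getPath))
  }
}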
72. HDFS: KEY CONCEPTS
Hierarchical file system, similar to Unix/Linux
File and directory metadata: name, owner, group owner, permissions, status
Files are split into blocks, which are distributed
74. FOR WHICH SCENARIOS IS HDFS NOT SO GOOD?
Low-latency applications
Many small files
Random access
Data updates
Iterative algorithms
80. PAIN POINTS WITH MAPREDUCE
Latency
Limited to Map and Reduce phases
Not trivial to test…
Can result in complex workflows
Data reuse requires writing to HDFS
81. WHAT IS APACHE SPARK?
A cluster computing engine
Abstracts away storage and cluster management
A unified data interface
An easy programming model
APIs in Scala, Python, Java, R
(a setup sketch follows below)
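Before the word-count example a few slides ahead, here is a minimal hypothetical sketch of how the sparkContext it uses could be created; the app name and master URL are placeholder assumptions.

import org.apache.spark.{SparkConf, SparkContext}

// "local[*]" runs Spark locally using all available cores; on a real
// cluster this would be the cluster manager's URL instead
val conf = new SparkConf().setAppName("word-count").setMaster("local[*]")
val sparkContext = new SparkContext(conf)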
86. WORD COUNT EXAMPLE
val file = sparkContext.textFile("input path")
val counts = file.flatMap(line => line.split(" "))   // split each line into words
                 .map(word => (word, 1))             // pair each word with a count of 1
                 .reduceByKey((a, b) => a + b)       // sum the counts for each word
counts.saveAsTextFile("destination path")
88. RDD: RESILIENT DISTRIBUTED DATASET
A read-only collection of objects
Partitioned across a set of machines
Can be rebuilt if one of the partitions is lost
Can be reused
Can be cached in memory
89. RDD: RESILIENT DISTRIBUTED DATASET
Lazily evaluated
Enables efficient data reuse
Many operations for data processing
90. THE RDD INTERFACE
A set of partitions (“splits”)
+
A list of dependencies on other RDDs
+
A function to compute a partition, given its dependencies
91. OPERATIONS ON RDDS
Transformations
Return a new RDD with the transformation applied
Lazy
Can be chained
Actions
Execute the DAG of transformations
(a sketch follows below)
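A short hypothetical sketch of this distinction, assuming the sparkContext created in the earlier setup sketch:

// Transformations are lazy: nothing is computed when these two lines run
val words = sparkContext.textFile("input path").flatMap(line => line.split(" "))
val longWords = words.filter(word => word.length > 5)
// count() is an action: it triggers execution of the whole chained DAG
println(longWords.count())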
96. SHUFFLE IS NOT MANDATORY
Programs are no longer limited to map and reduce phases
Shuffle and sort are no longer mandatory between phases
97. REDUCED IO
No disk IO is needed between phases, thanks to the pipelining of operations
There is no network IO unless a shuffle is required
98. IN-MEMORY DATA CACHING
Optional in-memory cache
The DAG engine can apply optimizations, since by the time an action is called it knows all the transformations to apply
(a closing sketch follows below)
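As a closing hypothetical sketch, again assuming the sparkContext from the earlier setup, the optional cache enables the efficient data reuse mentioned on slide 89:

// cache() marks the RDD to be kept in memory once an action computes it
val words = sparkContext.textFile("input path").flatMap(line => line.split(" ")).cache()
// The first action materializes and caches the RDD...
println(words.count())
// ...later actions reuse the in-memory data instead of re-reading the file
println(words.distinct().count())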