The Future of Apache Hadoop Security
Joey Echeverria, Chief Architect of Public Sector
©2014 Cloudera, Inc. All rights reserved.

Hadoop | had(y)ōōp |
noun
a system for executing arbitrary binaries over arbitrary, often large datasets: we used Hadoop to count an exabyte of words.

Joey Echeverria
joey@cloudera.com
@fwiffo

Editor's Notes

  1. You probably came to hear me talk about security, but security is boring. So instead, I’m going to talk about hungry hungry hippos. If you haven’t played hungry hungry hippos before, it’s a fairly simple game where four players compete to collect the most marbles. Now hungry hungry hippos is usually played with all white marbles. This makes sense because all players can collect all marbles. But I’m going to change the rules. Image source: https://www.flickr.com/photos/carbonnyc/3234684182
  2. BOOM, multiple colors. That’s more like it. Now that I have all this great variety, I want to restrict which hippos can consume which colors of marbles. Why do I want this new rule? Well, it doesn’t matter because I’m the one giving the talk so I get to make the rules. Image source: https://www.flickr.com/photos/andrewmorrell/55268996
  3. Although seriously, do you want any of these yahoos to be able to collect any color of marble? Take the guy in the green, I never trust a man with a beard. Now that we’ve established that we want to limit access to certain color marbles, let’s brainstorm a couple of different ways to implement this. Image source: https://www.flickr.com/photos/timefortea3/12938376425/
  4. Let's start by sorting the marbles into groups with the same color. We can then control access to these groups of marbles by creating magical boundaries that only specific hippos can penetrate. This system works well, but it's not very granular. We have to pre-group the marbles that each hippo can access into its own magic box. Image source: https://www.flickr.com/photos/stevendepolo/5601377451
  5. If some magic is good, more magic is even better! I’d much rather have magic marbles and magic hippos. Instead of having to first put the same colored marbles into the same magic box, now I can mix all the marbles together. Thanks to magic, each hippo will only grab the colors that they’re allowed. Any other marbles will pass right through them. This saves me a lot of time and it also allows me to invite more players. Now anyone can play and they’ll only ever collect the right kind of marble. Image source: https://www.flickr.com/photos/jdhancock/7082879485
  6. This is convenient because I’m very lazy. I’d hate to have to pre-sort things. It’s much easier for me to grab a handful of marbles and just throw them on the board and let the magic sort it all out. Image source: https://www.flickr.com/photos/tambako/633374069
  7. At this point you may be asking what on earth does any of this have to do with Hadoop? Well, you might be asking that assuming you didn’t just walk out while I was up here rambling about hippos, marbles, and magic. If you think about how the usage of Hadoop has evolved, it started very much the same way as our hungry hungry hippos game. All of the marbles were the same color and any player could collect any marble. This was great when we were deploying Hadoop for a small set of users and we trusted every user with all of the data. But when something is useful, it inevitably leads to more adoption. Image source: https://www.flickr.com/photos/secretlondon/4582476286
  8. As more and more people show up to use our cluster, we have to think more and more carefully about who has access to what data. Before Hadoop had strong security controls, it implemented advisory authorization at the file and directory level. I say advisory because while permissions existed, Hadoop initially didn't require that you prove you are who you say you are. This helped you prevent mistakes, but didn't stop malicious users. Image source: https://www.flickr.com/photos/scott-s_photos/12712204375
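
A minimal sketch of what file-and-directory-level authorization looks like through the HDFS client API; the path and mode here are hypothetical. Note the granularity problem the talk describes: the whole file is the smallest thing you can protect.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class FilePermissions {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());

            // Restrict a (hypothetical) directory to its owner and group.
            // Without authentication, these permissions are only advisory:
            // they catch mistakes, not attackers.
            fs.setPermission(new Path("/data/sensitive"),
                    new FsPermission((short) 0750));
        }
    }
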
  9. Hadoop solved this problem by adding support for Kerberos-based authentication. Now each user was given strong credentials that they could use to gain access to the system. Kerberos has become so synonymous with Hadoop security that 90% of the time, if someone says they configured or enabled Hadoop security, they're probably talking about turning on Kerberos authentication. Image source: https://www.flickr.com/photos/16048742@N08/3458184491
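
As a concrete illustration, here is a minimal sketch of a client authenticating from a keytab with Hadoop's UserGroupInformation API; the principal and keytab path are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberosLogin {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Tell the client libraries the cluster expects Kerberos.
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);

            // Log in from a keytab instead of an interactive kinit;
            // the principal and keytab path are made up for this example.
            UserGroupInformation.loginUserFromKeytab(
                    "joey@EXAMPLE.COM", "/etc/security/keytabs/joey.keytab");

            System.out.println("Authenticated as "
                    + UserGroupInformation.getCurrentUser().getUserName());
        }
    }
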
  10. This is great. You could now check that each user was who they said they were. You still have a bit of a problem in that permissions only exist at the file level. That means that if I want to control access to particular types of data, I have to merge all of the protected data into their own files and set permissions accordingly. This is especially annoying if I'm using a query language like Impala, Hive or Pig. I'm now trying to access tables of records, but I have to manage my security controls with very blunt tools. Image source: https://www.flickr.com/photos/ballena/4167217995
  11. Before I talk about how we've made progress on the granularity problem, I want to take a minute to talk about how Hadoop, and in particular MapReduce, implements process-based isolation. Before Hadoop had security, every job was executed as the hadoop user. This meant that even though you accessed files by default using the user identity that submitted the job, there was nothing that prevented you from peeking over your shoulder and looking at some of the output from another job, since all of the intermediate data was protected only by OS permissions and all jobs ran as the same OS user. Hadoop solved this problem by adding the ability to su to the user that submitted the job before executing the job process. This is very useful from a security perspective, but I bring it up because it's something that often trips up new administrators who are deploying Hadoop for the first time. TLDR; you must provision user accounts on every node in your cluster for any user that can run a MapReduce job. This is often done using LDAP or Active Directory so you don't have to manage all the accounts by hand. Image source: https://www.flickr.com/photos/mkamp/2429091134
  12. At it’s core, Hadoop is a system for executing arbitrary code over arbitrary data. Let me say that one more time. Arbitrary code running over arbitrary data. This is why Hadoop security is tougher than most other systems. The system starts with the ability to just run random code, you need to set up multiple barriers of protection before you have a fully secured system.
  13. Why do we care about controlling data access with finer granularity? It all comes down to multitenancy. It's cool to have a 100 node Hadoop cluster that serves all of the users in your department, but it's even cooler to have a 1000 node Hadoop cluster that serves all of the users in your company. Because we want to share these large clusters to get good economies of scale, we need to come up with more creative ways to control access to data. Again, we could keep sorting data into files and directories and implementing all of our controls at the file level, but that gets old fast. Image source: https://www.flickr.com/photos/erozkosz/6003136440
  14. One of the first efforts towards adding fine-grained access control was a project called Apache Accumulo. Accumulo is similar to HBase in that it's also based on Google's BigTable design. One of the places Accumulo departed from the source material was to add an additional element to the key that provides a security label at the cell level. This is very useful as it provides very fine-grained access; unfortunately, the scanning speed of Accumulo and HBase isn't as fast as scanning HDFS directly, so they're less ideal for large batch workloads (hint: that's what MapReduce was designed for). HBase initially added security at the table and column level, but saw how much fun Accumulo was having and added a cell-level option in the latest release. Image source: https://www.flickr.com/photos/28096801@N05/4332863749
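
A sketch of what those cell-level labels look like with the Accumulo 1.x client API; the instance, ZooKeeper quorum, user, and table names are hypothetical, and the table is assumed to already exist.

    import org.apache.accumulo.core.client.BatchWriter;
    import org.apache.accumulo.core.client.BatchWriterConfig;
    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.Scanner;
    import org.apache.accumulo.core.client.ZooKeeperInstance;
    import org.apache.accumulo.core.client.security.tokens.PasswordToken;
    import org.apache.accumulo.core.data.Mutation;
    import org.apache.accumulo.core.data.Value;
    import org.apache.accumulo.core.security.Authorizations;
    import org.apache.accumulo.core.security.ColumnVisibility;

    public class CellLevelSecurity {
        public static void main(String[] args) throws Exception {
            // Hypothetical instance, quorum, and credentials.
            Connector conn = new ZooKeeperInstance("accumulo", "zk1:2181")
                    .getConnector("joey", new PasswordToken("secret"));

            // Write a cell whose visibility label only "green" principals satisfy.
            BatchWriter writer = conn.createBatchWriter("marbles", new BatchWriterConfig());
            Mutation m = new Mutation("row1");
            m.put("color", "shade", new ColumnVisibility("green"),
                    new Value("emerald".getBytes("UTF-8")));
            writer.addMutation(m);
            writer.close();

            // A scan only returns cells whose labels are satisfied by the
            // authorizations the caller presents.
            Scanner scanner = conn.createScanner("marbles", new Authorizations("green"));
            scanner.forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue()));
        }
    }
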
  15. Not wanting to be left out of the party, the community has done some work to provide fine-grained access control to data stored in HDFS. The two most popular ways of doing that today are with Apache Sentry (Incubating) and Apache Hive's new, next generation authorization features. Sentry works by plugging into existing projects and adding RBAC from the outside. This is nice because it gives you a common way of controlling access to data across the different file formats and processing engines. Today, Sentry supports Hive, Impala and Apache Solr. Sentry only provides access control down to the view level, but you can simulate column- and even row-level access by creating views that expose a subset of columns or that filter rows. The downside to these methods is that today, you only get the access control if you're accessing your data through one of the supported engines. You can't have granular access through both Hive and regular MapReduce. Image source: https://www.flickr.com/photos/47217301@N06/5175262042
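
As an illustrative sketch of the view trick, here is what simulating column-level access might look like over the HiveServer2 JDBC driver, with Sentry enforcing the grants. The endpoint, table, view, and role names are hypothetical, the role is assumed to already exist, and the exact GRANT syntax depends on your Sentry and Hive versions.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ColumnLevelViews {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            // Hypothetical HiveServer2 endpoint.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:hive2://hiveserver:10000/default", "joey", "");
                 Statement stmt = conn.createStatement()) {
                // Expose only the non-sensitive columns through a view...
                stmt.execute("CREATE VIEW customers_public AS "
                        + "SELECT name, city FROM customers");
                // ...then grant access to the view, not the raw table.
                stmt.execute(
                        "GRANT SELECT ON TABLE customers_public TO ROLE analysts");
            }
        }
    }
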
  16. Because setting up Hadoop security is still somewhat complex, a recent trend has been to try to get all of your security at the perimeter. This isn't necessarily a new idea. Folks have been using Hue for a number of years to provide limited access at the boundary. What's different is that more and more users are looking for perimeter-controlled API access, not just a user-facing GUI. The most popular dedicated project in this area is Apache Knox. Knox is nice in that it lets you grant access to select users through a proxy service. The downside is that Knox implements its own REST APIs, so you can't just take a standard Hadoop client and point it at Knox. The other limit of perimeter security is that if you're allowed to upload jar files and submit jobs, then you still need all of the other security features enabled to prevent jobs from running amok. Image source: https://www.flickr.com/photos/sidibousaid/6985757255
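
To make the perimeter idea concrete, here is a rough sketch of calling a Knox gateway's proxied WebHDFS-style endpoint over HTTPS with basic authentication. The gateway host, topology name, and credentials are hypothetical, and it assumes the gateway's certificate is already trusted by the JVM.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Base64;

    public class KnoxClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical gateway host and topology; Knox proxies
            // WebHDFS-style REST calls under /gateway/<topology>/webhdfs.
            URL url = new URL("https://knox.example.com:8443"
                    + "/gateway/default/webhdfs/v1/tmp?op=LISTSTATUS");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();

            // Knox authenticates at the perimeter, e.g. with HTTP Basic
            // credentials checked against LDAP.
            String auth = Base64.getEncoder()
                    .encodeToString("joey:secret".getBytes("UTF-8"));
            conn.setRequestProperty("Authorization", "Basic " + auth);

            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                in.lines().forEach(System.out::println);
            }
        }
    }
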
  17. This brings me to the topic of trust. By default, all of the data in Hadoop is stored in the clear. That means that you have to trust the system administrators that have root access to the cluster not to go poking around in data they shouldn't. You also trust that your network is secure and that malicious users can't capture or sniff traffic that wasn't meant for them. These assumptions are fine for a large number of users, but they don't satisfy the most paranoid among us. Image source: https://www.flickr.com/photos/powerbooktrance/466709245
  18. For these users, Hadoop supports encryption in a number of different places. Today, it is largely available for data and metadata that goes over the wire. You can encrypt Hadoop's RPC protocol, the data block streaming protocol, and the MapReduce shuffle. This encryption is implemented with SASL for RPC and block streaming, which limits the encryption codec options to a certain degree. Shuffle encryption is implemented with SSL, and so it supports whatever cipher suites are available in Java's SSL implementation. Image source: https://www.flickr.com/photos/greeblemonkey/2689378015
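
The knobs described above are ordinary Hadoop configuration properties. This sketch sets them programmatically for illustration, though in a real deployment they live in core-site.xml, hdfs-site.xml, and mapred-site.xml respectively; property names can vary across versions, so treat these as assumptions to verify against your release.

    import org.apache.hadoop.conf.Configuration;

    public class WireEncryption {
        public static void main(String[] args) {
            Configuration conf = new Configuration();

            // Normally in core-site.xml: "privacy" tells the SASL layer to
            // encrypt RPC traffic (vs. "authentication" or "integrity").
            conf.set("hadoop.rpc.protection", "privacy");

            // Normally in hdfs-site.xml: encrypt the block streaming
            // protocol between clients and DataNodes.
            conf.setBoolean("dfs.encrypt.data.transfer", true);

            // Normally in mapred-site.xml: run the MapReduce shuffle over
            // SSL (cipher suites come from the JVM's SSL implementation).
            conf.setBoolean("mapreduce.shuffle.ssl.enabled", true);
        }
    }
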
  19. Over the wire is great and all, but what about disk-based encryption? Today, Hadoop doesn't support native disk-level encryption. Historically, users that needed to encrypt their data on disk used third-party tools that would encrypt the data volumes. Work is ongoing upstream on HDFS-6134 to add native encryption at rest to HDFS, and it should come out in a future release. You may have also seen the announcement that Cloudera has bought Gazzang and we're supporting their Hadoop encryption solution on our stack. When HDFS-6134 is merged in, you'll be able to do block-level encryption on HDFS for a pre-specified list of files and directories. The coolest part is that the key management is pluggable, so you'll be able to use whichever system best meets your needs. Pluggable, scalable encryption for all your big data needs. Yes please. Image source: https://www.flickr.com/photos/kubina/326629513
  20. So, what does the future hold? Well, if you've been following along, you see a number of themes that are pushing Hadoop's security boundaries. Hadoop clusters are getting larger and serving more diverse user bases. Increasingly, deployments are no longer trusting the network or the administrators that are running the cluster. Folks are accessing ever-increasing volumes of data with more and more diverse processing engines. Image source: https://www.flickr.com/photos/pasukaru76/3998273279
  21. This brings us back to granularity. HBase has already added cell-level security; will HDFS follow suit? Obviously it's harder for a file system to protect objects that are contained within the files themselves, so more creative solutions will likely be necessary. HDFS is already increasing the flexibility for protecting files and directories. Hadoop 2.4.0 added support for file system access control lists (ACLs). This means you're no longer limited by the POSIX-style permissions that have been around since Hadoop 0.16. Image source: https://www.flickr.com/photos/dnet/5921138809
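
A minimal sketch of the ACL API that arrived with Hadoop 2.4.0; the path and user name are hypothetical, and the NameNode must have dfs.namenode.acls.enabled set to true.

    import java.util.Arrays;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.AclEntry;
    import org.apache.hadoop.fs.permission.AclEntryScope;
    import org.apache.hadoop.fs.permission.AclEntryType;
    import org.apache.hadoop.fs.permission.FsAction;

    public class HdfsAcls {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());

            // Grant one extra (hypothetical) user read access without
            // touching the owner, group, or other permission bits --
            // something plain POSIX-style permissions can't express.
            AclEntry entry = new AclEntry.Builder()
                    .setScope(AclEntryScope.ACCESS)
                    .setType(AclEntryType.USER)
                    .setName("joey")
                    .setPermission(FsAction.READ_EXECUTE)
                    .build();
            fs.modifyAclEntries(new Path("/data/sensitive"), Arrays.asList(entry));
        }
    }
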
  22. I’m personally excited for the future of Sentry. Sentry already supports more projects than any of the other fine grain access control solutions out there. You can expect to see it integrate with more and more processing engines in the future. There is already extensive work being done to add a DB-based backend to Sentry (SENTRY-37) which will be more sustainable than the configuration file used today. That work will also enable the ability to use SQL-based GRANT/REVOKE commands to update permissions and roles. In keeping with the granularity theme, there is also a proposal to add true column-level access control to Sentry. This will eliminate the need to create views in order to simulate column-level access. But what I’m most excited about are plans to add a Sentry Record Service on top of HDFS. This would abstract away the access to files on the file system with a distributed services for accessing records. This service will be accessible from any processing engine and any level of security, including cell-level, could be implemented. The design work is still early and subject to change, but APIs are planned to be fully layered so you could add new security operations like transparent encryption directly into the service. This is going to be huge. Image source: https://www.flickr.com/photos/24874528@N04/6863197796
  23. I already spoke a bit about what encryption features are available today and what's in the pipeline. A big proponent of the additional encryption technologies is Intel, through its Project Rhino initiative. Through Rhino, a number of encryption and other security-related enhancements have been proposed and worked on in upstream Hadoop. One of the biggest benefits of the work being completed under Rhino is support for accelerated encryption codecs when running on certain Intel chips. This will enable large-scale deployments of encryption without huge performance penalties. For its part, Cloudera is doubling down on Rhino and investing heavily in the security of Apache Hadoop. Image source: https://www.flickr.com/photos/gwegner/8247098628
  24. I want to leave you with this final thought. All of the work that has been done to add security to Hadoop, and all of the work that's coming down the pipeline, is there to enhance your ability to share data. This is the key idea behind Hadoop. You can argue about which processing engine to use or even which file system, but Hadoop as an idea is bigger than all of them. Hadoop, for the first time, gives us a platform where you can store all of your data, access it in whatever way makes the most sense, and do so while sharing huge clusters among thousands of users. Security is really at the heart of this. All of the great capabilities of Hadoop are for naught if it won't let you securely share and access your data. Image source: https://www.flickr.com/photos/ben_grey/4582294721
  25. So, play a game and have some fun. Image source: https://www.flickr.com/photos/davef3138/2685563950