Scalding is a Scala DSL for Cascading. Running on Hadoop, it’s a concise, functional, and very efficient way to build big data applications. One significant benefit of Scalding is that it makes it easy to port apps from MapReduce to newer, faster execution fabrics.
In this webinar, Cyrille Chépélov, of Transparency Rights Management, will share how his organization boosted the performance of their Scalding apps by over 50% by moving away from MapReduce to Cascading 3.0 on Apache Tez. Dhruv Kumar, Hortonworks Partner Solution Engineer, will then explain how you can interact with data on HDP using Scala and leverage Scala as a programming language to develop Big Data applications.
Next Steps…
Download the Hortonworks Sandbox
Learn Hadoop
Build Your Analytic App
Try Hadoop 2
More about Concurrent & Hortonworks
http://hortonworks.com/partner/concurrent
More about Transparency Rights Management
http://www.transparencyrights.com/
Contact us: events@hortonworks.com
Hortonworks has a singular focus: enabling Apache Hadoop as an enterprise data platform for any app and any data type.
We were founded in 2011 by 24 developers from Yahoo, where Hadoop was conceived to address data challenges at internet scale. What we now know as Hadoop really started in 2005, when a team at Yahoo was directed to build out a large-scale data storage and processing technology that would allow them to improve their most critical application, Search.
Their challenge was essentially two-fold. First they needed to capture and archive the contents of the internet, and then process the data so that users could search through it effectively and efficiently. Clearly, traditional approaches were both technically (due to the size of the data) and commercially (due to the cost) impractical. The result was the Apache Hadoop project, which delivered large-scale storage (HDFS) and processing (MapReduce).
Today we are over 600 employees and have partnered with over 900 companies that are the leaders in the data center.
We have also been very fortunate to achieve very significant customer adoption with over 230 customers as of Q3 2014, spanning nearly every vertical.
Hortonworks was founded with the sole intent of making Hadoop an enterprise data platform. With YARN as its foundation, HDP delivers a centralized architecture with true multi-tenancy for data processing and shared services for Security, Governance and Operations to satisfy enterprise requirements, all deeply integrated and certified with leading datacenter technologies.
We are uniquely focused on this transformation of Hadoop, and we do our work completely in open source. This is all predicated on our leadership in the community, which enables us not only to best support users, but also to uniquely represent customer requirements within this open, thriving community.
Before we dive into Hadoop and its role within the modern data architecture, let’s set the context for why Hadoop has become important.
Existing approaches for data management have become both technically and commercially impractical.
Technically - these systems were never designed to store or process vast quantities of data
Commercially – the licensing structures of the traditional approach are no longer feasible.
These two challenges, combined with the rate at which data is being produced, created the need for a new approach to data systems. If we fast-forward another 3 to 5 years, more than half of the data under management within the enterprise will come from these new data sources.
Enter Hadoop.
Faced with this challenge, the team at Yahoo conceived and created Apache Hadoop. Convinced that contributing the platform to an open community would speed innovation, they open sourced the technology under the governance of the Apache Software Foundation (ASF). This introduced two significant advantages.
Not only could they manage new data types at scale, but they now had a commercially feasible approach.
However, there were still significant challenges. The first generation of Hadoop was:
- designed and optimized for Batch only workloads,
- it required dedicated clusters for each application, and,
- it didn’t integrate easily with many of the existing technologies present in the data center.
Also, like any emerging technology, Hadoop had to meet a certain level of readiness demanded by the enterprise.
After running Hadoop at scale at Yahoo, the team spun out to form Hortonworks with the intent to address these challenges and make Hadoop enterprise-ready.
In 2011, Hortonworks was founded with the 24 original Hadoop architects and engineers from Yahoo!
This original team had been working on a technology called YARN (Yet Another Resource Negotiator) that enables multiple applications to access all your enterprise data through an efficient centralized platform. It is the data operating system for Hadoop, providing the versatility to handle any application and dataset, no matter the size or type.
Moreover, YARN provided the centralized architecture around which the critical enterprise services of Security, Operations, and Governance could be centrally addressed and integrate with existing enterprise policies.
This work allowed a new approach to data to emerge: the modern data architecture. At the heart of this approach is Hadoop’s capability to unify data and processing in an efficient data platform.
Meet Jane. Jane loves music.
And Jane’s favourite music video platform has all the music Jane loves.
So Jane listens to music from the Platform.
After October 2013: moved on to different things; the topic was left in storage for a while.
September 2014: new model, same concept; built on plain Cascading to simplify some of the hairiest SQL logic (Optiq lacked analytic functions, so what was pretty much a single SQL statement in the SQL Server days had to be exploded into 12 stages).
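To give a flavor of that kind of rewrite: an analytic (window) function such as `ROW_NUMBER() OVER (PARTITION BY key ORDER BY value DESC)` has to be expressed as explicit group/sort/flatten stages when the SQL layer lacks analytic functions. This is a minimal plain-Scala sketch of the idea (not the actual Transparency Rights Management code); all names are illustrative.

```scala
object Analytic {
  // Sketch of ROW_NUMBER() OVER (PARTITION BY key ORDER BY value DESC),
  // rewritten as explicit stages: group by key, sort each group descending,
  // then number the rows within each group.
  def rowNumber[K, V](rows: Seq[(K, V)])(implicit ord: Ordering[V]): Seq[(K, V, Int)] =
    rows
      .groupBy(_._1)                       // stage 1: partition by key
      .toSeq
      .flatMap { case (k, grp) =>
        grp.map(_._2)
          .sorted(ord.reverse)             // stage 2: order by value desc
          .zipWithIndex                    // stage 3: number within partition
          .map { case (v, i) => (k, v, i + 1) }
      }
}
```

In a pipeline without window functions, each of these stages typically becomes its own grouping/sorting step, which is how one statement balloons into many.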
Met the guys from Lausanne at the end of September. I was already curious about Scala/Scalding by then, and decided to spend two days giving it a spin.
Never looked back!
Tez 0.6.2-SNAPSHOT is required.
Warning: the Tez 0.7 runtime is not API-compatible with 0.6 (although the source-level API is quite close). Cascading may change its Tez dependency from time to time…
The typical Hadoop+Tez stack pulls in a Jetty, a Tomcat, a Jersey, multiple Guava versions, and the kitchen sink.
We believe our workload requires 270-ish MiB of native memory. When we have time, we’ll either power down to add extra sticks of RAM, or attempt to shave 20 MiB of heap per TezChild.
(reportedly)
“Hash joins” means not just explicit hash joins, but also .filter/mapWithValue, joinWithTiny, etc.
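For readers unfamiliar with joinWithTiny: the small side is materialized into an in-memory map and probed per record on the large side, instead of shuffling both sides. This is a minimal plain-Scala sketch of that idea (not Scalding’s actual implementation); all names are illustrative.

```scala
object HashJoinSketch {
  // Hash-join sketch: load the tiny side into a lookup map, then stream the
  // big side through it. This is the general shape behind Scalding's
  // joinWithTiny / hashJoin: no shuffle of the big side is needed.
  def joinWithTiny[K, V, W](big: Seq[(K, V)], tiny: Seq[(K, W)]): Seq[(K, (V, W))] = {
    val lookup: Map[K, W] = tiny.toMap   // whole small side held in memory
    big.flatMap { case (k, v) =>
      lookup.get(k).map(w => (k, (v, w)))   // inner join: drop unmatched keys
    }
  }
}
```

The trade-off is memory: the tiny side must fit comfortably in each worker’s heap, which is also why heap headroom per TezChild matters.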
Who wants to see another “Word Count”?
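For completeness anyway: here is the classic word count, sketched with plain Scala collections so it runs without the Scalding/Hadoop runtime, mirroring the shape of a Scalding TypedPipe pipeline (flatMap → map → group/sum). All names are illustrative.

```scala
object WordCountSketch {
  // Plain-collections analogue of the canonical Scalding word count.
  // In Scalding the same shape would be roughly:
  //   TypedPipe.from(TextLine(input)).flatMap(_.split("\\s+"))
  //     .map(w => (w, 1L)).sumByKey.write(...)
  def count(lines: Seq[String]): Map[String, Long] =
    lines
      .flatMap(_.split("\\s+"))          // tokenize, like TypedPipe.flatMap
      .filter(_.nonEmpty)
      .map(word => (word, 1L))           // key each word with a count of 1
      .groupBy(_._1)                     // like grouping / sumByKey
      .map { case (w, pairs) => (w, pairs.map(_._2).sum) }
}
```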
I’m not going to walk through it: fairly standard code, except where I’ve been naïve. You get the idea.