Online Analytics with Hadoop and Cassandra
Robbie Strickland
who am i?


Robbie Strickland
Managing Architect, E3 Greentech

rostrickland@gmail.com
linkedin.com/in/robbiestrickland
github.com/rstrickland
in scope
● offline vs online analytics
● Cassandra overview
● Hadoop-Cassandra integration
● writing map/reduce against Cassandra
● Scala map/reduce
● Pig with Cassandra
● questions
common hadoop usage pattern
● batch analysis of unstructured/semi-structured data
● analysis of transactional data requires moving it into HDFS or an offline HBase cluster
● use Sqoop, Flume, or some other ETL process to aggregate and pull in data from external sources
● well-suited for high-latency offline analysis
the online use case
● need to analyze live data from a transactional DB
● too much data or too little time to efficiently move it offline before analysis
● need to immediately feed results back into the live system
● reduce complexity/improve maintainability of M/R jobs
● run Pig scripts against live data (i.e. data exploration)

Cassandra + Hadoop makes this possible!
what is cassandra?
● open-source NoSQL column store
● distributed hash table
● tunable consistency
● dynamic linear scalability
● masterless, peer-to-peer architecture
● highly fault tolerant
● extremely low latency reads/writes
● keeps active data set in RAM -- tunable
● all writes are sequential
● automatic replication
cassandra data model
● keyspace roughly == database
● column family roughly == table
● each row has a key
● each row is a collection of columns or supercolumns
● a column is a key/value pair
● a supercolumn is a collection of columns
● column names/quantity do not have to be the same across rows in the same column family
● keys & column names are often composite
● supports secondary indexes (best suited to low-cardinality columns)
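
as a quick illustration, the standard column family shown on the next slide could be created and populated from the CLI roughly like this (a sketch -- the UTF8 types are assumptions, and it assumes you've already created and switched to a keyspace):

create column family standard_column_family
    with comparator = UTF8Type
    and key_validation_class = UTF8Type
    and default_validation_class = UTF8Type;

set standard_column_family['key1']['column1'] = 'value1';
set standard_column_family['key1']['column2'] = 'value2';
set standard_column_family['key2']['column1'] = 'value1';
set standard_column_family['key2']['column3'] = 'value3';
set standard_column_family['key2']['column4'] = 'value4';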
cassandra data model

keyspace
  standard_column_family
    key1
      column1 : value1
      column2 : value2
    key2
      column1 : value1
      column3 : value3
      column4 : value4

keyspace
  super_column_family
    key1
      supercol1
        column1 : value1
        column2 : value2
      supercol2
        column3 : value3
    key2
      supercol3
        column4 : value4
      supercol4
        column5 : value5
let's have a look ...
cassandra query model
● get by key(s)
● range slice - from key A to key B in sort order (partitioner choice matters here)
● column slice - from column A to column B ordered according to the specified comparator type
● indexed slice - all rows that match a predicate (requires a secondary index to be in place for the column in the predicate)
● "write it like you intend to read it"
● Cassandra Query Language (CQL)
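
for illustration, a column slice is expressed in the Thrift API as a SlicePredicate -- the same structure handed to ConfigHelper.setInputSlicePredicate later in this deck. a minimal sketch (the count and column name are placeholders):

import java.util.Arrays;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SliceRange;
import org.apache.cassandra.utils.ByteBufferUtil;

// slice every column from the start of the row to the end, max 1000 per row
SlicePredicate predicate = new SlicePredicate();
predicate.setSlice_range(
    new SliceRange(ByteBufferUtil.EMPTY_BYTE_BUFFER,  // start ("" = first column)
                   ByteBufferUtil.EMPTY_BYTE_BUFFER,  // finish ("" = last column)
                   false,                             // not reversed
                   1000));                            // max columns returned

// or request specific columns by name instead of a range
SlicePredicate byName = new SlicePredicate();
byName.setColumn_names(Arrays.asList(ByteBufferUtil.bytes("column1")));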
cassandra topology

Node 1 -- start token 0
Node 2 -- start token 2^127 / 3
Node 3 -- start token 2 * 2^127 / 3

(three nodes spaced evenly around the token ring)
cassandra client access
● uses Thrift as the underlying protocol with a number of language bindings
● CLI is useful for simple CRUD and schema changes
● high-level clients available for most common languages -- this is the recommended approach
● CQL supported by some clients
cassandra with hadoop
● supports input via ColumnFamilyInputFormat
● supports output via ColumnFamilyOutputFormat
● HDFS still used by Hadoop unless you use the DataStax Enterprise version
● supports Pig input/output via CassandraStorage
● preserves data locality
● use 2 "data centers" -- one for transactional and the other for analytics
● replication between data centers is automatic and immediate
cassandra with hadoop - topology

Data Center 1 - Transactional
  Node 1 -- start token 0 (replica)
  Node 2 -- start token 2^127 / 3
  Node 3 -- start token 2 * 2^127 / 3 (replica)

Data Center 2 - Analytics (receives replication from DC 1)
  Node 4 -- start token N1 + 1 -- DataNode, TaskTracker, replica; also runs the Hadoop NameNode & JobTracker
  Node 5 -- start token N2 + 1 -- DataNode, TaskTracker
  Node 6 -- start token N3 + 1 -- DataNode, TaskTracker
configuring transactional nodes
● install Cassandra 1.1
● config properties are in cassandra.yaml (in /etc/cassandra or $CASSANDRA_HOME/conf) and in create keyspace options
● placement strategy = NetworkTopologyStrategy
● strategy options = DC1:2, DC2:1
● endpoint snitch = PropertyFileSnitch -- put DC/rack specs in cassandra-topology.properties (example below)
● designate at least one node as seed(s)
● make sure listen & rpc addresses are correct
● use nodetool -h <host> ring to verify everyone is talking
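
a sketch of what that might look like (IPs, rack names, and the keyspace name are placeholders):

# cassandra-topology.properties -- maps each node's IP to DC:rack
10.0.0.1=DC1:RAC1
10.0.0.2=DC1:RAC1
10.0.0.3=DC1:RAC1
10.0.1.1=DC2:RAC1
10.0.1.2=DC2:RAC1
10.0.1.3=DC2:RAC1
default=DC1:RAC1

# keyspace creation from the CLI, with 2 replicas in DC1 and 1 in DC2
create keyspace metrics
    with placement_strategy = 'NetworkTopologyStrategy'
    and strategy_options = {DC1:2, DC2:1};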
configuring analytics nodes
● configure Cassandra using the transactional instructions
● ensure there are no token collisions
● install Hadoop 1.0.1+ TaskTracker & DataNode
● add Cassandra jars & dependencies to the Hadoop classpath -- either copy them to $HADOOP_HOME/lib or add the path in hadoop-env.sh (example below)
● Cassandra jars are typically in /usr/share/cassandra, and dependencies in /usr/share/cassandra/lib if installed using a package manager
● you may need to increase the Cassandra RPC timeout
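
for example (paths assume a package-manager install; the timeout value is arbitrary):

# in $HADOOP_HOME/conf/hadoop-env.sh
export HADOOP_CLASSPATH=/usr/share/cassandra/*:/usr/share/cassandra/lib/*:$HADOOP_CLASSPATH

# in cassandra.yaml, if jobs time out waiting on Cassandra
rpc_timeout_in_ms: 20000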
let's have a look ...
writing map/reduce
Assumptions:
● Hadoop 1.0.1+
● new Hadoop API (i.e. the one in org.apache.hadoop.mapreduce, not mapred)
● Cassandra 1.1
job configuration
● use the Hadoop job configuration API to set general Hadoop configuration parameters
● use Cassandra ConfigHelper to set Cassandra-specific configuration parameters
job configuration - example
Hadoop config related to Cassandra:

job.setInputFormatClass(ColumnFamilyInputFormat.class);
job.setOutputFormatClass(ColumnFamilyOutputFormat.class);
job.setOutputKeyClass(ByteBuffer.class);
job.setOutputValueClass(List.class);
job configuration - example
Cassandra-specific config:

ConfigHelper.setInputRpcPort(conf, "9160");
ConfigHelper.setInputInitialAddress(conf, cassHost);
ConfigHelper.setInputPartitioner(conf, partitionerClassName);
ConfigHelper.setInputColumnFamily(conf, keyspace, columnFamily);
ConfigHelper.setInputSlicePredicate(conf, someQueryPredicate);
ConfigHelper.setOutputRpcPort(conf, "9160");
ConfigHelper.setOutputInitialAddress(conf, cassHost);
ConfigHelper.setOutputPartitioner(conf, partitionerClassName);
ConfigHelper.setOutputColumnFamily(conf, keyspace, columnFamily);
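
putting both halves together, a minimal driver might look like this (a sketch, not the talk's actual code -- class names, host, keyspace/column family names, and the partitioner are placeholders; the mapper and reducer correspond to the sketches on the following slides):

import java.nio.ByteBuffer;
import java.util.List;

import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
import org.apache.cassandra.hadoop.ColumnFamilyOutputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SliceRange;
import org.apache.cassandra.utils.ByteBufferUtil;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class AverageJob extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    Job job = new Job(getConf(), "column-averages");
    Configuration conf = job.getConfiguration();

    job.setJarByClass(AverageJob.class);
    job.setMapperClass(AverageMapper.class);    // hypothetical mapper (see next slides)
    job.setReducerClass(AverageReducer.class);  // hypothetical reducer (see next slides)
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(LongWritable.class);

    // Hadoop config related to Cassandra
    job.setInputFormatClass(ColumnFamilyInputFormat.class);
    job.setOutputFormatClass(ColumnFamilyOutputFormat.class);
    job.setOutputKeyClass(ByteBuffer.class);
    job.setOutputValueClass(List.class);

    // read every column of every row, up to 1000 columns per row
    SlicePredicate predicate = new SlicePredicate();
    predicate.setSlice_range(new SliceRange(ByteBufferUtil.EMPTY_BYTE_BUFFER,
                                            ByteBufferUtil.EMPTY_BYTE_BUFFER,
                                            false, 1000));

    // Cassandra-specific config
    ConfigHelper.setInputRpcPort(conf, "9160");
    ConfigHelper.setInputInitialAddress(conf, "10.0.1.1");
    ConfigHelper.setInputPartitioner(conf, "org.apache.cassandra.dht.RandomPartitioner");
    ConfigHelper.setInputColumnFamily(conf, "metrics", "readings");
    ConfigHelper.setInputSlicePredicate(conf, predicate);
    ConfigHelper.setOutputRpcPort(conf, "9160");
    ConfigHelper.setOutputInitialAddress(conf, "10.0.1.1");
    ConfigHelper.setOutputPartitioner(conf, "org.apache.cassandra.dht.RandomPartitioner");
    ConfigHelper.setOutputColumnFamily(conf, "metrics", "averages");

    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new AverageJob(), args));
  }
}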
the mapper
● input key is a ByteBuffer
● input value is a SortedMap<ByteBuffer, IColumn>, where the ByteBuffer is the column name
● use Cassandra ByteBufferUtil to read/write ByteBuffers
● map output must be a Writable as in "standard" M/R
example mapper
Group long column values by column name:

for (IColumn col : columns.values())
   context.write(new Text(ByteBufferUtil.string(col.name())),
                 new LongWritable(ByteBufferUtil.toLong(col.value())));
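
wrapped in a full class, that mapper might look like this (a sketch -- the class name is a placeholder, and the output types match the driver sketch above):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.SortedMap;

import org.apache.cassandra.db.IColumn;
import org.apache.cassandra.utils.ByteBufferUtil;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class AverageMapper
    extends Mapper<ByteBuffer, SortedMap<ByteBuffer, IColumn>, Text, LongWritable> {

  @Override
  public void map(ByteBuffer key, SortedMap<ByteBuffer, IColumn> columns, Context context)
      throws IOException, InterruptedException {
    // emit (column name, column value) for every column in the row
    for (IColumn col : columns.values())
      context.write(new Text(ByteBufferUtil.string(col.name())),
                    new LongWritable(ByteBufferUtil.toLong(col.value())));
  }
}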
the reducer
● input is whatever you output from map
● output key is ByteBuffer == row key in Cassandra
● output value is List<Mutation> == the columns you want to change
● uses Thrift API to define Mutations
example reducer
Compute average of each column's values:
long sum = 0;
int count = 0;
for (LongWritable val : values) {
   sum += val.get();
   count++;
}

Column c = new Column();
c.setName(ByteBufferUtil.bytes("Average"));
c.setValue(ByteBufferUtil.bytes(sum/count));
c.setTimestamp(System.currentTimeMillis());

Mutation m = new Mutation();
m.setColumn_or_supercolumn(new ColumnOrSuperColumn());
m.column_or_supercolumn.setColumn(c);
context.write(ByteBufferUtil.bytes(key.toString()), Collections.singletonList(m));
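
and the surrounding reducer class, as a sketch (the class name is a placeholder; the input types match the mapper above):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.List;

import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.ColumnOrSuperColumn;
import org.apache.cassandra.thrift.Mutation;
import org.apache.cassandra.utils.ByteBufferUtil;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class AverageReducer
    extends Reducer<Text, LongWritable, ByteBuffer, List<Mutation>> {

  @Override
  public void reduce(Text key, Iterable<LongWritable> values, Context context)
      throws IOException, InterruptedException {
    // body from the example above: sum the values, build the
    // "Average" column, and emit one Mutation for this row key
  }
}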
let's have a look ...
outputting to multiple CFs
● currently not supported out of the box
● patch available against 1.1 to allow this using the Hadoop MultipleOutputs API
● not necessary to patch production Cassandra, only the Cassandra jars on Hadoop's classpath
● patch will not break existing reducers
● to use (see the sketch after this list):
  ○ git clone Cassandra 1.1 source
  ○ use git apply to apply the patch
  ○ build with ant
  ○ put the new jars on the Hadoop classpath
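
roughly, the workflow might look like this (the repository URL, branch, and patch file name are assumptions, not exact commands from the talk):

git clone -b cassandra-1.1 https://github.com/apache/cassandra.git
cd cassandra
git apply /path/to/multiple-outputs.patch   # hypothetical patch file name
ant jar
cp build/apache-cassandra-*.jar $HADOOP_HOME/lib/   # only on the Hadoop nodes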
multiple output example
// in job configuration
ConfigHelper.setOutputKeyspace(conf, keyspace);

MultipleOutputs.addNamedOutput(job, columnFamily1,
   ColumnFamilyOutputFormat.class, ByteBuffer.class,
   List.class);

MultipleOutputs.addNamedOutput(job, columnFamily2,
   ColumnFamilyOutputFormat.class, ByteBuffer.class,
   List.class);
multiple output example
// in reducer
private MultipleOutputs _output;

public void setup(Context context) {
   _output = new MultipleOutputs(context);
}

public void cleanup(Context context) {
   _output.close();
}
multiple output example
// in reducer - writing to an output

Mutation m1 = ...
Mutation m2 = ...

_output.write(columnFamily1, ByteBufferUtil.bytes(key1), m1);
_output.write(columnFamily2, ByteBufferUtil.bytes(key2), m2);
let's have a look ...
map/reduce in scala
why? because this:

values.groupBy(_.name)
      .map { case (name, vals) => name -> vals.map(ByteBufferUtil.toInt).sum }
      .filter { case (name, sum) => sum > 100 }
      .foreach { case (name, sum) =>
        context.write(new Text(ByteBufferUtil.string(name)), new IntWritable(sum))
      }


... would require many more lines of code in Java and be less readable!
map/reduce in scala
... and because we can parallelize it by doing this:

values.par // make this collection parallel
      .groupBy(_.name)
      .map { case (name, vals) => name -> vals.map(ByteBufferUtil.toInt).sum }
      .filter { case (name, sum) => sum > 100 }
      .foreach { case (name, sum) =>
        context.write(new Text(ByteBufferUtil.string(name)), new IntWritable(sum))
      }


(!!!)
map/reduce in scala
● add the Scala library to the Hadoop classpath
● import scala.collection.JavaConversions._ to get Scala coolness with Java collections
● main executable is an empty class with the implementation in a companion object
● context param in map & reduce methods must be:
  ○ context: Mapper[ByteBuffer, SortedMap[ByteBuffer, IColumn], <MapKeyClass>, <MapValueClass>]#Context
  ○ context: Reducer[<MapKeyClass>, <MapValueClass>, ByteBuffer, java.util.List[Mutation]]#Context
  ○ it will compile but not function correctly without the type information (see the sketch below)
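
a skeleton mapper spelling out the Context type (a sketch -- the class name and Text/IntWritable output types are assumptions matching the earlier example):

import java.nio.ByteBuffer
import java.util.SortedMap

import org.apache.cassandra.db.IColumn
import org.apache.cassandra.utils.ByteBufferUtil
import org.apache.hadoop.io.{IntWritable, Text}
import org.apache.hadoop.mapreduce.Mapper

import scala.collection.JavaConversions._

class SumMapper extends Mapper[ByteBuffer, SortedMap[ByteBuffer, IColumn], Text, IntWritable] {

  // the Context type parameters must be written out in full
  override def map(key: ByteBuffer, columns: SortedMap[ByteBuffer, IColumn],
      context: Mapper[ByteBuffer, SortedMap[ByteBuffer, IColumn], Text, IntWritable]#Context) {
    columns.values.foreach { col =>   // JavaConversions makes this iterable as a Scala collection
      context.write(new Text(ByteBufferUtil.string(col.name)),
                    new IntWritable(ByteBufferUtil.toInt(col.value)))
    }
  }
}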
let's have a look ...
pig
● allows ad hoc queries against Cassandra
● schema applied at query time
● can run in local mode or using a Hadoop cluster
● useful for data exploration, mass updates, back-populating data, etc.
● Cassandra supported via CassandraStorage -- both a LoadFunc and a StoreFunc
● now comes pre-built in the Cassandra distribution
● pygmalion adds some useful Cassandra UDFs -- github.com/jeromatron/pygmalion
pig - getting set up
● download Pig -- better not to use a package manager
● get the pig_cassandra script from Cassandra source (it's in examples/pig) -- or I put it on github with all the samples
● set environment variables as described in the README (example below):
  ○ JAVA_HOME = location of your Java install
  ○ PIG_HOME = location of your Pig install
  ○ PIG_CONF_DIR = location of your Hadoop config files
  ○ PIG_INITIAL_ADDRESS = address of one Cassandra analytics node
  ○ PIG_RPC_PORT = Thrift port -- default 9160
  ○ PIG_PARTITIONER = your Cassandra partitioner
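
for example (all paths and the address are placeholders):

export JAVA_HOME=/usr/lib/jvm/java-6-sun
export PIG_HOME=/opt/pig-0.9.2
export PIG_CONF_DIR=$HADOOP_HOME/conf
export PIG_INITIAL_ADDRESS=10.0.1.1
export PIG_RPC_PORT=9160
export PIG_PARTITIONER=org.apache.cassandra.dht.RandomPartitioner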
pig - loading data
Standard CF:
rows = LOAD 'cassandra://<keyspace>/<column_family>'
       USING CassandraStorage()
       AS (key: bytearray, cols: bag {col: tuple(name, value)});

Super CF:
rows = LOAD 'cassandra://<keyspace>/<column_family>'
       USING CassandraStorage()
       AS (key: bytearray, cols: bag {(supercol, subcols: bag {(name, value)})});
pig - writing data
STORE command:
STORE <relation_name>
INTO 'cassandra://<keyspace>/<column_family>'
USING CassandraStorage();

Standard CF schema:
(key, {(col_name, value), (col_name, value)});

Super CF schema:
(key, {supercol: {(col_name, value), (col_name, value)},
       supercol: {(col_name, value), (col_name, value)}});
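
putting load and store together, a small end-to-end script might look like this (a sketch -- the keyspace/column family names are placeholders, and it assumes Pig's TOTUPLE/TOBAG builtins, available in Pig 0.9+):

rows = LOAD 'cassandra://demo/users'
       USING CassandraStorage()
       AS (key: bytearray, cols: bag {col: tuple(name, value)});

-- write one column per row back to Cassandra: its column count
counts = FOREACH rows GENERATE key, TOBAG(TOTUPLE('col_count', COUNT(cols)));

STORE counts INTO 'cassandra://demo/user_stats'
USING CassandraStorage();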
let's have a look ...
questions?
check out my github for this presentation and code samples!



Robbie Strickland
Managing Architect, E3 Greentech

rostrickland@gmail.com
linkedin.com/in/robbiestrickland
github.com/rstrickland
