Artimon
                  Mathias Herberts - @herberts




Apache Flume (incubating) User Meetup, Hadoop World 2011 NYC Edition
Arkéa Real Time Information Monitoring
Scalable metrics collection and analysis framework




▪ Collects metrics called 'variable instances'
▪ Dynamic discovery, (almost) no configuration needed
▪ Rich analysis library
▪ Fits IT and business needs
▪ Adapts to third party metrics
▪ Uses Flume and Kafka for transport
What's in a variable instance?

          name{label0=value0,label1=value1,...}


▪ name is the name of the variable
   linux.proc.diskstats.reads.ms
   hadoop.jobtracker.maps_completed

▪ Labels are text strings; they characterize a variable instance
   Some labels are set automatically: dc, rack, module, context, uuid, ...
   Others are user defined

▪ Variable instances are typed
   INTEGER, DOUBLE, BOOLEAN, STRING

▪ Variable instance values are timestamped
▪ Variable instance values are Thrift objects
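Putting those pieces together, a single variable instance might look like the line below; the variable name comes from the examples above, but the label values, type, timestamp, and value are hypothetical:

   linux.proc.diskstats.reads.ms{dc=dc1,rack=r12,module=webfront,context=linux}
   → INTEGER value 1234567, timestamp 1319718601000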
Exporting metrics


▪ Metrics are exported via a Thrift service
▪ Each MonitoringContext (context=...) exposes a service
▪ MonitoringContexts register their dynamic port in ZooKeeper
   /zk/artimon/contexts/xxx/ip:port:uuid

▪ MonitoringContext wrapped in a BookKeeper class
   public interface ArtimonBookKeeper {
     public void setIntegerVar(String name, final Map<String,String> labels, long value);
     public long addToIntegerVar(String name, final Map<String,String> labels, long delta);
     public Long getIntegerVar(String name, final Map<String,String> labels);
     public void removeIntegerVar(String name, final Map<String,String> labels);

     public void setDoubleVar(String name, final Map<String,String> labels, double value);
     public double addToDoubleVar(String name, final Map<String,String> labels, double delta);
     public Double getDoubleVar(String name, final Map<String,String> labels);
     public void removeDoubleVar(String name, final Map<String,String> labels);

     public void setStringVar(String name, final Map<String,String> labels, String value);
     public String getStringVar(String name, final Map<String,String> labels);
     public void removeStringVar(String name, final Map<String,String> labels);

     public void setBooleanVar(String name, final Map<String,String> labels, boolean value);
     public Boolean getBooleanVar(String name, final Map<String,String> labels);
     public void removeBooleanVar(String name, final Map<String,String> labels);
   }
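A hedged Groovy sketch of how application code might use this interface follows. The slides do not show how an ArtimonBookKeeper is obtained from a MonitoringContext, so a tiny in-memory stand-in is declared here purely to make the sketch self-contained; the metric names and label values are invented for illustration:

   // Groovy sketch; FakeBookKeeper only mimics the integer part of the
   // ArtimonBookKeeper interface above so the example can run on its own.
   class FakeBookKeeper {
     private final Map store = [:]
     void setIntegerVar(String name, Map<String, String> labels, long value) {
       store[[name, labels]] = value
     }
     long addToIntegerVar(String name, Map<String, String> labels, long delta) {
       long v = ((store[[name, labels]] ?: 0L) as long) + delta
       store[[name, labels]] = v
       return v
     }
     Long getIntegerVar(String name, Map<String, String> labels) {
       store[[name, labels]] as Long
     }
     void removeIntegerVar(String name, Map<String, String> labels) {
       store.remove([name, labels])
     }
   }

   def bk = new FakeBookKeeper()                               // real code would get its bookkeeper from a MonitoringContext
   def labels = [dc: 'dc1', rack: 'r12', module: 'webfront']   // hypothetical label values

   bk.addToIntegerVar('webfront.http.requests', labels, 1L)    // counter-style variable instance
   bk.setIntegerVar('webfront.http.sessions', labels, 42L)     // gauge-style variable instance

   assert bk.getIntegerVar('webfront.http.requests', labels) == 1L
   bk.removeIntegerVar('webfront.http.requests', labels)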
Exporting metrics


▪ Thrift service returns the latest values of known instances
▪ ZooKeeper not mandatory, can use a fixed port
▪ Artimon written in Java
▪ Checklist for porting to other languages
   ▪ Thrift support

   ▪ Optional ZooKeeper support
Collecting Metrics


▪ Flume launched on every machine
▪ 'artimon' source
   artimon(hosts, contexts, vars[, polling_interval])
   e.g. artimon("self", "*", "~.*")

   ▪ Watches ZooKeeper for contexts to poll

   ▪ Periodically collects latest values

▪ 'artimonProxy' decorator
   artimonProxy([[port],[ttl]])

   ▪ Exposes all collected metrics via a local port (no ZooKeeper, no loop)
Collecting Metrics


▪ Simulated flow using flume.flow event attribute
   artimon(...) | artimonProxy(...) value("flume.flow", "artimon")...

▪ Events batched and gzipped
   ... value("flume.flow", "artimon") batch(100,100) gzip() ...

▪ Kafka sink
   kafkasink(topic, propname=value...)

    ... gzip() < failChain("{ lazyOpen => { stubbornAppend => %s } } ",
                           "kafkasink("flume-artimon","zk.connect=quorum:2181/zk/kafka/prod")")
                 ? diskFailover("-kafka-flume-artimon")
                   insistentAppend stubbornAppend insistentOpen
                   failChain("{ lazyOpen => { stubbornAppend => %s } } ",
                             "kafkasink("flume-artimon","zk.connect=quorum:2181/zk/kafka/prod")") >;

                 ~ kafkaDFOChain
Consuming Metrics


▪ Kafka source
   kafkasource(topic, propname=value...)

▪ Custom BytesWritableEscapedSeqFileEventSink
   bwseqfile(filename[, idle[, maxage]])
   bwseqfile("hdfs://nn/hdfs/data/artimon/%Y/%m/%d/flume-artimon");

   ▪ N archivers in a single Kafka consumer group (same groupid)
   ▪ Metrics stored in HDFS as serialized Thrift in BytesWritables
   ▪ Can add archivers if metrics flow increases
   ▪ Ability to manipulate those metrics using Pig (a programmatic read-back sketch follows below)
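Beyond Pig, the archived metrics can presumably also be read back with the plain Hadoop API. A hedged Groovy sketch follows; it assumes the sequence-file values are BytesWritable holding one serialized Thrift variable instance each (as stated above), creates the key reflectively since its type is not specified, and uses a purely illustrative HDFS path:

   // Hedged sketch, not taken from the slides: iterate over one archived
   // sequence file and count the serialized Thrift variable instances.
   import org.apache.hadoop.conf.Configuration
   import org.apache.hadoop.fs.FileSystem
   import org.apache.hadoop.fs.Path
   import org.apache.hadoop.io.BytesWritable
   import org.apache.hadoop.io.SequenceFile
   import org.apache.hadoop.io.Writable
   import org.apache.hadoop.util.ReflectionUtils

   def conf = new Configuration()
   def path = new Path('hdfs://nn/hdfs/data/artimon/2011/10/27/flume-artimon')   // illustrative path
   def fs = FileSystem.get(path.toUri(), conf)
   def reader = new SequenceFile.Reader(fs, path, conf)

   def key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf)
   def value = new BytesWritable()                 // assumed value type, per the slide above
   long records = 0

   while (reader.next(key, value)) {
     // The first value.getLength() bytes of value.getBytes() hold one serialized
     // Thrift variable instance; deserialize with the Artimon Thrift classes.
     records++
   }
   reader.close()
   println "Read ${records} records"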
Consuming Metrics


▪ In-Memory history data (VarHistoryMemStore, VHMS)
  artimonVHMSDecorator(nthreads[0],
                       bucketspan[60000],
                       bucketcount[60],
                       gc_grace_period[600000],
                       port[27847],
                       gc_period[60000],
                       get_limit[100000]) null;

  ▪ Each VHMS in its own Kafka consumer group (each gets all metrics)
  ▪ Multiple VHMS with different granularities
      60×1', 48×5', 96×15', 72×24h (parameter mapping sketched below)
  ▪ Filter to ignore some metrics for some VHMS
     artimonFilter("!~linux.proc.pid.*")
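A hedged reading of how those granularities presumably map onto bucketspan/bucketcount: only the 60 × 1' case matches the defaults shown above, the other values are assumptions derived by the same arithmetic (bucketspan in milliseconds, history depth = bucketspan × bucketcount):

     60 × 1'  → bucketspan=60000,    bucketcount=60   (1 hour of history)
     48 × 5'  → bucketspan=300000,   bucketcount=48   (4 hours)
     96 × 15' → bucketspan=900000,   bucketcount=96   (24 hours)
     72 × 24h → bucketspan=86400000, bucketcount=72   (72 days)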
Why Kafka?


▪ Initially used tsink/rpcSource
   ▪ No ZooKeeper use for Flume (avoid flapping)
   ▪ Collector load balancing using DNS
   ▪ Worked fine for some time...

▪ But as metrics volume was increasing...
   ▪ DNS load balancing not ideal (herd effect when restarting collectors)
   ▪ Flume's push architecture got in the way
      Slowdowns not considered failures
      Had to add mechanisms for dropping metrics when congested
Why Kafka?


▪ Kafka to the rescue! Source/sink coded in less than a day
   ▪ Acts as a buffer between metrics producers and consumers
   ▪ ZooKeeper based discovery and load balancing
   ▪ Easily scalable, just add brokers

▪ Performance has increased
   ▪ Producers now push their metrics in less than 2s
   ▪ VHMS/Archivers consume at their pace with no producer slowdown
       => 1.3M metrics in ~10s


▪ Ability to go back in time when restarting a VHMS
▪ Flume still valuable, notably for DFO (collect metrics during NP)
▪ Artimon [pull] Flume [push] Kafka [pull] Flume
Analyzing Metrics


▪ Groovy library
   ▪ Talks to a VHMS to retrieve time series
   ▪ Manipulates time series, individually or in bulk

▪ Groovy scripts for monitoring
   ▪ Use the Artimon library

   ▪ IT Monitoring
   ▪ BAM (Business Activity Monitoring)

▪ Ability to generate alerts

   ▪ Each alert is an Artimon metric (archived for SLA compliance)
   ▪ Alerts propagate to Nagios; Kafka propagation is in the works (CEP for the alert manager)
Analyzing Metrics


▪ Bulk time series manipulation
   ▪ Equivalence classes based on labels (same values, same class)
   ▪ Apply ops (+, -, /, *, or an arbitrary closure) to two variables based on equivalence classes

          import static com.arkea.artimon.groovy.LibArtimon.*

          vhmssrc = export['vhms.60']

          dfvars = fetch(vhmssrc, '~^linux.df.bytes.(free|capacity)$', [:], 60000, -30000)

          dfvars = select(sel_isfinite(), dfvars)

          free = select(dfvars, '=linux.df.bytes.free', [:])
          capacity = select(sel_gt(0), select(dfvars, '=linux.df.bytes.capacity', [:]))

          usage = sort(apply(op_div(), free, capacity, [], 'freespace'))

          used50 = select(sel_lt(0.50), usage)
          used75 = select(sel_lt(0.25), usage)
          used90 = select(sel_lt(0.10), usage)
          used95 = select(sel_lt(0.05), usage)

          println 'Volumes occupied > 50%: ' + used50.size()
          println 'Volumes occupied > 75%: ' + used75.size()
          println 'Volumes occupied > 90%: ' + used90.size()
          println 'Volumes occupied > 95%: ' + used95.size()

          println 'Total volumes: ' + usage.size()

          Same script can handle any number of volumes, dynamically
Analyzing Metrics


▪ Map paradigm
  ▪ Apply a Groovy closure on n consecutive values of a time series (a conceptual sketch follows below)
     map(closure, vars, nticks, name)
     Predefined map_delta(), map_rate(), map_{min,max,mean}()
     map(map_delta(), vars, 2, '+:delta')

▪ Reduce paradigm
  ▪ Apply a Groovy closure on equivalence classes
  ▪ Generate one time series for each equivalence class
     reduceby(closure, vars, bylabels, name, relabels)
     Predefined red_sum(), red_{min,max,mean,sd}()
     reduceby(red_mean(), temps, ['dc','rack'], '+:rackavg',[:])
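To make the map paradigm concrete, here is a conceptual Groovy sketch of what a 2-tick delta map does. It does not use the real map_delta() closure (whose signature is not shown in the slides); each tick is simply assumed to be a [timestamp, value] pair:

   // Conceptual sketch only, not the Artimon implementation.
   // A time series is modelled as a list of [timestamp, value] ticks.
   def ticks = [[1000L, 10L], [2000L, 14L], [3000L, 21L]]

   // Sliding window of 2 consecutive ticks -> delta between them, timestamped
   // with the later tick (mirrors the effect of map(map_delta(), vars, 2, ...)).
   def deltas = (1..<ticks.size()).collect { i ->
     [ticks[i][0], ticks[i][1] - ticks[i - 1][1]]
   }

   assert deltas == [[2000L, 4L], [3000L, 7L]]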
Analyzing Metrics


▪ A whole lot more
   getvars      selectbylabels   relabel
   fetch        partition        fillprevious
   find         top              fillnext
   findlabels   bottom           fillvalue
   display      outliers         map
   makevar      dropOutliers     reduceby
   nticks       resample         settype
   timespan     normalize        triggerAlert
   lasttick     standardize      clearAlert
   values       sort             CDF
   targets      scalar           PDF
   getlabels    ntrim            Percentile
   dump         timetrim         sparkline
   select       apply            ...
Third Party Metrics


▪ JMX Agent
       ▪ Expose any JMX metrics as Artimon metrics
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    525762846
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    511880426
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    492037666
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    436896839
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-artimon-0}    333034505
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    163186980
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    163047011
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    162916713
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    162704303
jmx.kafka.log.logstats:currentoffset{context=kafka,jmx.domain=kafka,jmx.type=kafka.logs.flume-syslog-0}    162565421
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8835417
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8794654
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8793525
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8741181
jmx.kafka.network.socketserverstats:numfetchrequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     8019699
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     51999885
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     51991203
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     51986318
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     51980976
jmx.kafka.network.socketserverstats:numproducerequests{context=kafka,jmx.domain=kafka,jmx.type=kafka.SocketServerStats}     48008009
Third Party Metrics


▪ Flume artimonReader source
   artimonReader(context, periodicity, file0[, fileX])

   ▪ Periodically reads files containing a text representation of metrics
       [timestamp] name{labels} value


   ▪ Exposes those metrics via the standard mechanism
   ▪ Simply create scripts which write those files and add them to crontab (a sketch follows the sample below)
   ▪ Successfully used for NAS, Samba, MQSeries, SNMP, MySQL, ...

       1319718601000   mysql.bytes_received{db=mysql-roller} 296493399
       1319718601000   mysql.bytes_sent{db=mysql-roller} 3655368849
       1319718601000   mysql.com_admin_commands{db=mysql-roller} 673028
       1319718601000   mysql.com_alter_db{db=mysql-roller} 0
       1319718601000   mysql.com_alter_table{db=mysql-roller} 0
       1319718601000   mysql.com_analyze{db=mysql-roller} 0
       1319718601000   mysql.com_backup_table{db=mysql-roller} 0
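As a hedged illustration of such a crontab script (written in Groovy here; only the line format comes from the slides, while the metric name, label, and file path are made up):

   // Hypothetical producer script for artimonReader: it rewrites one file with
   // a single metric line in the "timestamp name{labels} value" text format
   // shown above. Metric name, label, and output path are illustrative only.
   def now = System.currentTimeMillis()
   def host = InetAddress.localHost.hostName

   // Illustrative measurement: number of entries in /etc/passwd.
   def entries = new File('/etc/passwd').readLines().size()

   new File('/var/tmp/artimon-passwd.metrics').text =
       "${now} unix.passwd.entries{host=${host}} ${entries}\n"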
PostMortem Analysis


▪ Extract specific metrics from HDFS
   ▪ Simple Pig script

▪ Load extracted metrics into a local VHMS
▪ Interact with VHMS using Groovy
   ▪ Existing scripts can be run directly if parameterized correctly

▪ Interesting use cases
   ▪ Did we respect our SLAs? Would the new SLAs be respected too?
   ▪ What happened pre/post incident?
   ▪ Would a modified alert condition have triggered an alert?
Should we OpenSource this?




  http://www.arkea.com/



         @herberts
