Counting Fast
      (Part I)

          Sergei Vassilvitskii
        Columbia University
Computational Social Science
              March 8, 2013
Computers are fast!

  Servers:
   – 3.5+ GHz

  Laptops:
   – 2.0 - 3.0 GHz

  Phones:
   – 1.0-1.5 GHz



  Overall: they execute billions of operations per second!




But Data is Big!

  Datasets are huge:
   – Social Graphs (Billions of nodes, each with hundreds of edges)
      • Terabytes (million million bytes)
   – Pictures, Videos, associated metadata:
      • Petabytes (million billion bytes!)




Computers are getting faster
  Moore’s law (1965!):
   – Number of transistors on a chip doubles every two years.




Computers are getting faster

  Moore’s law (1965!):
   – Number of transistors on a chip doubles every two years.



  For a few decades:
   – The speed of chips doubled every 24 months.


  Now:
   – The number of cores doubling
   – Speed staying roughly the same




But Data is Getting Even Bigger

  Unknown author, 1981 (?):
   – “640K ought to be enough for anyone”



  Eric Schmidt, March 2013:
   – “There were 5 exabytes of information created between the dawn of
     civilization through 2003, but that much information is now created
     every 2 days, and the pace is increasing.”




Data Sizes
  What is Big Data?
   – MB in 1980s
   – GB in 1990s
   – TB in 2000s
   – PB in 2010s

  [Figure: hard drive capacity over time]




Working with Big Data

  Two datasets of numbers:
   – Want to find the intersection (common values)
   – Why?
     • Data cleaning (these are missing values)
     • Data mining (these are unique in some way)




Working with Big Data

  Two datasets of numbers:
   – Want to find the intersection (common values)
   – Why?
      • Data cleaning (these are missing values)
      • Data mining (these are unique in some way)


   – How long should it take?
      • Each dataset has 10 numbers?
      • Each dataset has 10k numbers?
      • Each dataset has 10M numbers?
      • Each dataset has 10B numbers?
      • Each dataset has 10T numbers?




How to Find Intersections?




Idea 1: Scan

  Look at every number in list 1:
   – Scan through dataset 2, see if you find a match


  common_elements = 0
  for number1 in dataset1:
     for number2 in dataset2:
        if number1 == number2:
           common_elements += 1




Idea 1: Scanning

 For each element in dataset 1, scan through dataset 2, see if it’s present


 common_elements = 0
 for number1 in dataset1:
    for number2 in dataset2:
       if number1 == number2:
          common_elements += 1


 Analysis: Number of times if statement executed?
 – |dataset2| for every iteration of outer loop
 – |dataset1| * |dataset2| in total




Idea 1: Scanning

 Analysis: Number of times if statement executed?
 – |dataset2| for every iteration of outer loop
 – |dataset1| * |dataset2| in total


 Running time:
 – 100M * 100M = 10^16 comparisons in total
 – At 1B (10^9) comparisons / second




Idea 1: Scanning

 Analysis: Number of times if statement executed?
 – |dataset2| for every iteration of outer loop
 – |dataset1| * |dataset2| in total


 Running time:
 – 100M * 100M = 10^16 comparisons in total
 – At 1B (10^9) comparisons / second
 – 10^7 seconds ~ 4 months!


 – Even with 1000 computers: 10^4 seconds -- about 2.8 hours!




Idea 2: Sorting

  Suppose both sets are sorted
   – Keep pointers to each
   – Check for match, increase the smaller pointer



  [Blackboard]
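
  For example (a hypothetical tiny input): with sorted lists [1, 3, 5, 7] and [3, 4, 7, 9],
  both pointers start at the front. 1 < 3, so the first pointer advances; 3 == 3, so we count
  a match and advance both; 5 > 4, so the second pointer advances; 5 < 7, so the first
  advances; 7 == 7, so we count a second match and advance both; the first list is now
  exhausted, so we stop, having found 2 common elements.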




Idea 2: Sorting

 sorted1 = sorted(dataset1)
 sorted2 = sorted(dataset2)
 pointer1, pointer2 = 0, 0
 common_elements = 0
 while pointer1 < len(sorted1) and pointer2 < len(sorted2):
    if sorted1[pointer1] == sorted2[pointer2]:
       common_elements += 1
       pointer1 += 1; pointer2 += 1
    elif sorted1[pointer1] < sorted2[pointer2]:
       pointer1 += 1
    else:
       pointer2 += 1

Analysis:
– Number of times if statement executed?
– A pointer advances on every iteration, so at most |dataset1| + |dataset2| iterations in total

Idea 2: Sorting

Analysis:
– Number of times if statement executed?
– A pointer advances on every iteration, so at most |dataset1| + |dataset2| iterations in total


Running time:
–   At most 100M + 100M comparisons
–   At 1B comparisons/second ~ 0.2 seconds
–   Plus cost of sorting! ~1 second per list
–   Total time = 2.2 seconds




Reasoning About Running Times (1)

  Worry about the computation as a function of input size:
  – “If I double my input size, how much longer will it take?” (see the sketch after this list)
     • Linear time (comparisons after sorting): twice as long!
     • Quadratic time (scan): four (2^2) times as long
     • Cubic time (very slow): eight (2^3) times as long
     • Exponential time (untenable): the running time is squared (e.g., 2^n becomes 2^(2n) = (2^n)^2)
     • Sublinear time (uses sampling, skips over parts of the input)
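
  To make the doubling behavior concrete, here is a minimal sketch (idealized operation
  counts in Python, not measured times; the function names are invented for illustration):

  # Idealized operation counts: how the work grows when the input size doubles.
  def linear_ops(n):
      return n          # e.g., the pointer comparisons after sorting

  def quadratic_ops(n):
      return n * n      # e.g., the nested-loop scan

  for n in [1_000, 2_000, 4_000]:
      print(f"n={n:>5}  linear={linear_ops(n):>8}  quadratic={quadratic_ops(n):>10}")
  # Doubling n doubles the linear count but quadruples the quadratic count.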




Reasoning About Running Times (2)

  Worry about the computation as a function of input size.
  Worry about order of magnitude, not exact running time:
  – Difference between 2 seconds and 4 seconds much smaller than
    between 2 seconds and 3 months!
      • The merge step over the sorted lists does more work per iteration of its while loop
        (but only a constant amount more) -- roughly 3 comparisons instead of 1.
      • Therefore, we still call it linear time




Reasoning about running time

  Worry about the computation as a function of input size.
  Worry about order of magnitude, not exact running time.



  Captured by the Order notation: O(.)
  – For an input of size n, approximately how long will it take?
  – Scan: O(n^2)
  – Comparisons after sorting: O(n)




Reasoning about running time

  Worry about the computation as a function of input size.
  Worry about order of magnitude, not exact running time.



  Captured by the Order notation: O(.)
  – For an input of size n, approximately how long will it take?
  – Scan: O(n^2)
  – Comparisons after sorting: O(n)
  – Sorting = O(n log n)
     • Slightly more than n,
     • But much less than n^2 (see the sketch below).
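
  As a rough sanity check (a sketch assuming the 100M-element datasets from the running example):

  import math

  n = 100_000_000                                # hypothetical dataset size
  print(f"n         = {n:.1e}")                  # linear:   1.0e+08
  print(f"n log2(n) = {n * math.log2(n):.1e}")   # sorting:  ~2.7e+09
  print(f"n squared = {n * n:.1e}")              # scan:     1.0e+16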




Avoiding Sort: Hashing

  Idea 3.
   – Store each number in list1 in a location unique to it
    – For each element in list2, check whether its unique location is already occupied (if so, the value is in both lists)


  [Blackboard]




Idea 3: Hashing

  table = set()
  for i in range(len(dataset1)):
     table.add(dataset1[i])
  common_elements = 0
  for i in range(len(dataset2)):
     if dataset2[i] in table:
        common_elements += 1

  Analysis:
   – Number of additions to the table: |dataset1|
   – Number of comparisons: |dataset2|
    – If additions to the table and lookups both run at 1B / second
   – Total running time is: 0.2s
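
  As a usage note, Python's built-in sets are hash tables, so the same idea is a one-liner
  (shown on hypothetical tiny lists):

  dataset1 = [3, 1, 4, 1, 5, 9, 2, 6]   # hypothetical inputs
  dataset2 = [2, 7, 1, 8, 2, 8]
  common_elements = len(set(dataset1) & set(dataset2))
  print(common_elements)                 # 2 -- the values 1 and 2 appear in both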




Lots of Details

  Hashing, Sorting, Scanning:
   – All have their advantages
   – Scanning: in place, just passing through the data
   – Sorting: in place (no extra storage), much faster
   – Hashing: not in place, even faster




Lots of Details

  Hashing, Sorting, Scanning:
   – All have their advantages
   – Scanning: in place, just passing through the data
   – Sorting: in place (no extra storage), much faster
   – Hashing: not in place, even faster


  Reasoning about algorithms:
   – Non-trivial (and hard!)
   – A large part of computer science
   – Luckily mostly abstracted
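
  To see the trade-offs in practice, here is a rough benchmarking sketch (synthetic data,
  deliberately small so the quadratic scan finishes; exact timings will vary by machine):

  import random
  import time

  n = 2_000  # small on purpose: the scan would take months at 100M elements
  dataset1 = random.sample(range(10 * n), n)
  dataset2 = random.sample(range(10 * n), n)

  def scan(d1, d2):                     # Idea 1: nested loops, O(n^2)
      return sum(1 for x in d1 for y in d2 if x == y)

  def sort_and_merge(d1, d2):           # Idea 2: sort, then walk two pointers
      s1, s2 = sorted(d1), sorted(d2)
      i = j = common = 0
      while i < len(s1) and j < len(s2):
          if s1[i] == s2[j]:
              common += 1; i += 1; j += 1
          elif s1[i] < s2[j]:
              i += 1
          else:
              j += 1
      return common

  def hash_lookup(d1, d2):              # Idea 3: hash table, O(n)
      table = set(d1)
      return sum(1 for x in d2 if x in table)

  for algorithm in (scan, sort_and_merge, hash_lookup):
      start = time.perf_counter()
      result = algorithm(dataset1, dataset2)
      print(f"{algorithm.__name__:15s} {result:4d} common  {time.perf_counter() - start:.3f}s")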




Break




Distributed Computation

  Working with large datasets:
  – Most datasets are skewed
  – A few keys are responsible for most of the data
  – Must take skew into account, since averages are misleading
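
  A toy sketch of why averages mislead on skewed data (the key names and counts are invented
  for illustration):

  from collections import Counter

  # One "heavy" key holds a million records; 999 "light" keys hold 100 each.
  records_per_key = Counter({"heavy_key": 1_000_000})
  for i in range(999):
      records_per_key[f"light_key_{i}"] = 100

  counts = sorted(records_per_key.values())
  print("average per key:", sum(counts) / len(counts))   # ~1,100
  print("median per key: ", counts[len(counts) // 2])    # 100
  print("max per key:    ", counts[-1])                  # 1,000,000
  # The machine handling the heavy key does ~900x the average amount of work,
  # so planning around the average badly underestimates the slowest machine.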




Additional Cost

  Communication cost
   – Prefer doing more on a single machine (even if that means extra computation) to
     constantly communicating across machines


   – Why? If you have 1000 machines talking to 1000 machines --- that’s
     1M channels of communication
   – The overall communication cost grows quadratically, which we have
     seen does not scale...
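
   A one-line check of that arithmetic (hypothetical cluster sizes):

   for machines in (10, 100, 1_000):
       print(f"{machines} machines talking to {machines} machines -> {machines * machines:,} channels")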




Analysis at Scale




Doing the study

  Suppose you had the data available. What would you do?


  If you have a hypothesis:
   – “Taking both Drug A and Drug B causes a side effect C”?




Doing the study

  If you have a hypothesis:
   – “Taking both Drug A and Drug B causes a side effect C”?
   Look at the ratio of observed symptoms over expected:
    – Expected: fraction of people who took drug A and saw effect C.
    – Observed: fraction of people who took drugs A and B and saw effect C.

   [Venn diagram: patients who took drug A, drug B, and experienced effect C]




Doing the study

  If you have a hypothesis:
   – “Taking both Drug A and Drug B causes a side effect C”?
   Look at the ratio of observed symptoms over expected:
    – Expected: fraction of people who took drug A and saw effect C.
    – Observed: fraction of people who took drugs A and B and saw effect C.
    – This is just counting!

   [Venn diagram: patients who took drug A, drug B, and experienced effect C]
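
   Since it is just counting, a minimal sketch (with an invented toy table of patient records;
   a real study would need far more care) might look like:

   # Each record: (took_A, took_B, saw_C) -- invented toy data for illustration.
   records = [
       (True,  False, True),
       (True,  True,  True),
       (True,  True,  False),
       (True,  False, False),
       (False, True,  False),
   ]

   took_A        = sum(1 for a, b, c in records if a)
   took_A_saw_C  = sum(1 for a, b, c in records if a and c)
   took_AB       = sum(1 for a, b, c in records if a and b)
   took_AB_saw_C = sum(1 for a, b, c in records if a and b and c)

   expected = took_A_saw_C / took_A      # rate of effect C among those on drug A
   observed = took_AB_saw_C / took_AB    # rate of effect C among those on A and B
   print("observed / expected =", observed / expected)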




Doing the study

  Suppose you had the data available. What would you do?


  Discovering hypotheses to test:
   – Many pairs of drugs, some co-occur very often
   – Some side effects are already known




