SIMILARITY OF DOCUMENTS BASED ON VECTOR SPACE MODEL
Introduction

This presentation gives an overview of the problem of finding similar
documents and how a vector space can be used to solve it.

A vector space is a mathematical structure formed by a
collection of elements called vectors, which may be added
together and multiplied ("scaled") by numbers, called scalars
in this context.

A document is a bag of words, i.e., a collection of words or terms. The
problem arises naturally in web search and classification, where the aim
is to find documents that are similar in context or content.
Introduction

A vector v can be expressed as a linear combination of elements:

v = a1*v1 + a2*v2 + … + an*vn

where the ak are called scalars or weights and the vk are the
components or elements.
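
For instance (a two-dimensional illustration, not from the original
slides): with components v1 = (1, 0) and v2 = (0, 1), the vector
v = 3*v1 + 2*v2 = (3, 2).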
Vectors

Now we explore how a set of documents can be represented as vectors in a
common vector space.

 V(d) denotes the vector derived from document d, with one
 component for each dictionary term.
[Diagram: document vectors V(d1) and V(d2) and a query vector V(Q)
plotted against term axes t1 and t2, with the angle θ between V(d1)
and V(Q).]

The documents in a collection can be viewed as a set of vectors in a vector
space in which there is one axis for every term.
Vectors

The diagram on the previous slide shows a simple representation of two
document vectors, V(d1) and V(d2), and a query vector V(Q).
The space contains terms {t1, t2, t3, …, tN}, but for simplicity only two
are shown, since there is an axis for each term.
Document d1 contains terms {t1, t3, …} and d2 contains {t2, …}, so V(d1)
lies closer to axis t1 and V(d2) closer to t2.

The angle θ between a document vector and the query vector represents
their closeness, and it is measured by the cosine of θ.
Vectors

Weights
The weight of each component of a document vector can be given by the
Term Frequency alone, or by a combination of Term Frequency and Inverse
Document Frequency.

Term Frequency, denoted tf, is the number of occurrences of a term t in a
document d.
Document Frequency, denoted df, is the number of documents in which a
particular term t occurs.

Inverse Document Frequency of a term t, denoted idf, is log(N/df), where
N is the total number of documents in the collection. It reduces the
weight of terms that occur in many documents; in other words, a rarely
occurring term carries more weight.
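
For example (using base-10 logarithms, as the worked example later in
this deck does): with N = 3 documents, a term that occurs in df = 2 of
them gets idf = log(3/2) ≈ 0.1761, while a term occurring in only one
document gets idf = log(3/1) ≈ 0.4771.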
Vectors

tf-idf weight

The combination of tf and idf is the most popular weighting used in
document similarity exercises.

tf-idf(t,d) = tf(t,d) * idf(t)

So the weight is highest when t occurs many times within a small number
of documents, and lowest when the term occurs few times in a document or
occurs in many documents.

Later, in the example, you will see how tf-idf weights are used in the
similarity calculation.
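
As a preview of that example: the term "silver" occurs twice in document
D2 and in only that one document out of three, so
tf-idf(silver, D2) = 2 * log(3/1) = 2 * 0.4771 = 0.9542.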
Similarity

Cosine Similarity
The similarity between two documents can be found by
computing the Cosine Similarity between their vector
representations.

sim(d1,d2) = V(d1) • V(d2) / ( |V(d1)| * |V(d2)| )

The numerator is the dot product of the two vectors,

∑ i=1 to M (xi * yi),

and the denominator is the product of the Euclidean lengths of the
vectors, where

|V(d1)| = √( ∑ i=1 to M (xi)² )
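
A minimal sketch of this computation in Python (the function name and the
toy vectors are illustrative, not from the original slides):

```python
import math

def cosine_similarity(x, y):
    """Cosine of the angle between two equal-length weight vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

# Toy check: parallel vectors give 1.0, orthogonal vectors give 0.0
print(cosine_similarity([1, 0], [2, 0]))  # 1.0
print(cosine_similarity([1, 0], [0, 1]))  # 0.0
```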
Similarity

For example, if vector d1 has component weights {w1, w2, w3} and vector
d2 has component weights {u1, u2}, then the dot product = w1*u1 + w2*u2.
Since d2 has no third component, the third term contributes w3*0 = 0.

Euclidean length of d1 = √( (w1)² + (w2)² + (w3)² )
Example
    This is a famous example given by Dr. David Grossman and Dr. Ophir
    Frieder of the Illinois Institute of Technology.
    There are 3 documents,
    D1 = “Shipment of gold damaged in a fire”
    D2 = “Delivery of silver arrived in a silver truck”
    D3 = “Shipment of gold arrived in a truck”
    Q = “gold silver truck”
    No. of docs, D = 3 ; Inverse document frequency, IDFi = log(D/dfi) (logs are base 10)
Terms            tfi                 dfi   D/dfi   IDFi      Weights = tfi * IDFi
           Q    D1    D2    D3                               Q        D1       D2       D3
a          0    1     1     1        3     1       0.0000    0.0000   0.0000   0.0000   0.0000
arrived    0    0     1     1        2     1.5     0.1761    0.0000   0.0000   0.1761   0.1761
damaged    0    1     0     0        1     3       0.4771    0.0000   0.4771   0.0000   0.0000
delivery   0    0     1     0        1     3       0.4771    0.0000   0.0000   0.4771   0.0000
gold       1    1     0     1        2     1.5     0.1761    0.1761   0.1761   0.0000   0.1761
fire       0    1     0     0        1     3       0.4771    0.0000   0.4771   0.0000   0.0000
in         0    1     1     1        3     1       0.0000    0.0000   0.0000   0.0000   0.0000
of         0    1     1     1        3     1       0.0000    0.0000   0.0000   0.0000   0.0000
shipment   0    1     0     1        2     1.5     0.1761    0.0000   0.1761   0.0000   0.1761
silver     1    0     2     0        1     3       0.4771    0.4771   0.0000   0.9542   0.0000
truck      1    0     1     1        2     1.5     0.1761    0.1761   0.0000   0.1761   0.1761
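
The table can be reproduced with a short script. This is a minimal sketch
(not from the original slides), assuming whitespace tokenization,
lowercased text, and base-10 logarithms:

```python
import math
from collections import Counter

docs = {
    "D1": "shipment of gold damaged in a fire",
    "D2": "delivery of silver arrived in a silver truck",
    "D3": "shipment of gold arrived in a truck",
}
query = "gold silver truck"

vocab = sorted({w for text in docs.values() for w in text.split()})
# Document frequency: how many documents contain each term
df = {t: sum(t in text.split() for text in docs.values()) for t in vocab}
idf = {t: math.log10(len(docs) / df[t]) for t in vocab}

def weights(text):
    """tf-idf weight for every vocabulary term in one text."""
    tf = Counter(text.split())
    return {t: tf[t] * idf[t] for t in vocab}

# Print the nonzero weights; they match the table above
for name, text in list(docs.items()) + [("Q", query)]:
    print(name, {t: round(w, 4) for t, w in weights(text).items() if w > 0})
```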
Example … continued
Similarity analysis

First, we calculate the vector lengths, i.e., the Euclidean length of
each vector,

|Di| = √( ∑j (wi,j)² )

|D1| = √( (0.4771)² + (0.1761)² + (0.4771)² + (0.1761)² ) = √0.5173 = 0.7192
|D2| = √( (0.1761)² + (0.4771)² + (0.9542)² + (0.1761)² ) = √1.2001 = 1.0955
|D3| = √( (0.1761)² + (0.1761)² + (0.1761)² + (0.1761)² ) = √0.1240 = 0.3522

|Q| = √( (0.1761)² + (0.4771)² + (0.1761)² ) = √0.2896 = 0.5382

Next, we calculate the dot product of the query vector with each document
vector,

Q • Di = ∑j (wQ,j * wi,j)

Q • D1 = 0.1761 * 0.1761 = 0.0310
Q • D2 = 0.4771*0.9542 + 0.1761*0.1761 = 0.4862
Q • D3 = 0.1761*0.1761 + 0.1761*0.1761 = 0.0620
Example … continued
Now, we calculate the cosine value,

cos θ(D1) = Q • D1 / (|Q| * |D1|) = 0.0310 / (0.5382 * 0.7192) = 0.0801
cos θ(D2) = Q • D2 / (|Q| * |D2|) = 0.4862 / (0.5382 * 1.0955) = 0.8246
cos θ(D3) = Q • D3 / (|Q| * |D3|) = 0.0620 / (0.5382 * 0.3522) = 0.3271

So, we see that document D2 is the most similar to the Query.
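
The whole ranking can be checked end to end with a short script. This is
a sketch under the same assumptions as before; the weight vectors are
hardcoded from the table, in the table's term order:

```python
import math

# Term order: a, arrived, damaged, delivery, gold, fire, in, of, shipment, silver, truck
Q  = [0, 0,      0,      0,      0.1761, 0,      0, 0, 0,      0.4771, 0.1761]
D1 = [0, 0,      0.4771, 0,      0.1761, 0.4771, 0, 0, 0.1761, 0,      0     ]
D2 = [0, 0.1761, 0,      0.4771, 0,      0,      0, 0, 0,      0.9542, 0.1761]
D3 = [0, 0.1761, 0,      0,      0.1761, 0,      0, 0, 0.1761, 0,      0.1761]

def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y)))

for name, d in [("D1", D1), ("D2", D2), ("D3", D3)]:
    print(name, round(cosine(Q, d), 4))
# Prints D1 0.0801, D2 0.8247, D3 0.3272 — matching the slide's
# 0.0801 / 0.8246 / 0.3271 up to the slide's intermediate rounding,
# and ranking D2 first in both cases.
```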
Conclusion
Pros
• Allows documents that only partially match the query to be identified.
• The cosine formula gives a score that can be used to rank documents.

Cons
• Documents are treated as bags of words, so positional information about
   the terms is lost.


Usage
  Apache Lucene, the text search library, uses this concept when scoring
documents that match a query.
Acknowledgements
•   Introduction to Information Retrieval by Christopher D. Manning,
    Prabhakar Raghavan, and Hinrich Schütze.
•   Term Vector Theory and Keyword Weights by Dr. E. Garcia.
•   Information Retrieval: Algorithms and Heuristics by Dr. David
    Grossman and Dr. Ophir Frieder of the Illinois Institute of Technology.
•   Wikipedia - http://en.wikipedia.org/wiki/Vector_space_model
