A Simple Introduction to Word Embeddings
Bhaskar Mitra, Microsoft (Bing Sciences)
Check out the full tutorial:
https://arxiv.org/abs/1705.01509
The value of science is not to make things complex, but to find the inherent simplicity.
- Frank Seide
Vector Space Models
Represent an item (e.g., word) as a vector of numbers.
banana: 0 1 0 1 0 0 2 0 1 0 1 0
Vector Space Models
Represent an item (e.g., word) as a vector of numbers.
The vector can correspond to documents in which the word occurs.
banana: 0 1 0 1 0 0 2 0 1 0 1 0
(the non-zero dimensions correspond to Doc2, Doc4, Doc7, Doc9, and Doc11)
Vector Space Models
Represent an item (e.g., word) as a vector of numbers.
The vector can correspond to neighboring word context.
e.g., “yellow banana grows on trees in africa”
banana: 0 1 0 1 0 0 2 0 1 0 1 0
(the non-zero dimensions correspond to the contexts (yellow, -1), (grows, +1), (on, +2), (tree, +3), and (africa, +5))
Vector Space Models
Represent an item (e.g., word) as a vector of numbers.
The vector can correspond to character trigrams in the word.
banana: 0 1 0 1 0 0 2 0 1 0 1 0
(the non-zero dimensions correspond to the trigrams #ba, ban, ana, nan, and na#; ana occurs twice, hence the count of 2)
Notions of Relatedness
Comparing two vectors (e.g., using cosine similarity) estimates how similar the two words are. However, the notion of relatedness depends on which vector representation you have chosen for the words.
Is seattle similar to denver? (Because they are both cities.)
Or is seattle similar to seahawks? (Because of “Seattle Seahawks”. Go Seahawks!)
Important note: in the previous slides I showed raw counts. They should either be normalized (e.g., using pointwise mutual information) or (matrix) factorized. More on this later.
Let’s consider the following example…
We have four (tiny) documents:
Document 1 : “seattle seahawks jerseys”
Document 2 : “seattle seahawks highlights”
Document 3 : “denver broncos jerseys”
Document 4 : “denver broncos highlights”
If we use document occurrence vectors…
(dimensions, in order: Document 1, Document 2, Document 3, Document 4)
seattle:  1 1 0 0
seahawks: 1 1 0 0
denver:   0 0 1 1
broncos:  0 0 1 1
Here seattle and seahawks are similar, and denver and broncos are similar.
In the rest of this talk, we refer to this notion of relatedness as Topical similarity.
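To make this concrete, here is a minimal, illustrative Python sketch (not from the slides) that builds the document-occurrence vectors above and compares words with cosine similarity:

```python
# A minimal, illustrative sketch (not from the slides): build the
# document-occurrence vectors above and compare words with cosine similarity.
import numpy as np

docs = [
    "seattle seahawks jerseys",
    "seattle seahawks highlights",
    "denver broncos jerseys",
    "denver broncos highlights",
]

vocab = sorted({w for d in docs for w in d.split()})
# One row per word, one column per document: 1 if the word occurs in it.
matrix = np.array([[1 if w in d.split() else 0 for d in docs] for w in vocab])
vec = dict(zip(vocab, matrix))

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(vec["seattle"], vec["seahawks"]))  # 1.0 -> topically similar
print(cosine(vec["seattle"], vec["denver"]))    # 0.0 -> topically unrelated
```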
If we use word context vectors…
(dimensions, in order: (seattle, -1), (seahawks, +1), (denver, -1), (broncos, +1), (jerseys, +1), (jerseys, +2), (highlights, +1), (highlights, +2))
seattle:  0 2 0 0 0 1 0 1
seahawks: 2 0 0 0 1 0 1 0
denver:   0 0 0 2 0 1 0 1
broncos:  0 0 2 0 1 0 1 0
Now seattle and denver are similar, and seahawks and broncos are similar.
In the rest of this talk, we refer to this notion of relatedness as Typical (by-type) similarity.
If we use character trigram vectors…
(dimensions, in order: #se, set, sea, eat, ett, att, ttl, tle, le#)
seattle: 1 0 1 1 0 1 1 1 1
settle:  1 1 0 0 1 0 1 1 1
Here seattle and settle are similar.
This notion of relatedness is similar to string edit-distance.
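A minimal, illustrative sketch of this trigram comparison (the slides use counts; sets are used here for brevity):

```python
# A minimal, illustrative sketch: character-trigram vectors via sets (the
# slides use counts; sets are an assumption made here for brevity).
import math

def trigrams(word):
    padded = "#" + word + "#"
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def trigram_cosine(a, b):
    # Cosine of two binary vectors = |A & B| / sqrt(|A| * |B|).
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / math.sqrt(len(ta) * len(tb))

print(trigram_cosine("seattle", "settle"))  # ~0.62: shares #se, ttl, tle, le#
print(trigram_cosine("seattle", "denver"))  # 0.0: no shared trigrams
```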
DIY: Learning Word Types
Take a sentence or query corpus and extract Word-Context pairs, where Context is the <neighboring word, distance> tuple.
Compute (Positive) Pointwise Mutual Information for every Word-Context pair.
Compute the cosine similarity between the context score vectors to estimate word similarity by type.
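Here is a minimal Python sketch of this recipe, reusing the four tiny documents as a stand-in corpus (the corpus choice and whole-document context window are assumptions made for brevity; only the three steps come from the slide):

```python
# A minimal, illustrative sketch of the three steps above, reusing the four
# tiny documents as a stand-in corpus. Treating the whole (3-word) document
# as the context window is an assumption made for brevity.
from collections import Counter
import math

corpus = [
    "seattle seahawks jerseys",
    "seattle seahawks highlights",
    "denver broncos jerseys",
    "denver broncos highlights",
]

# Step 1: extract Word-Context pairs, Context = (neighboring word, distance).
pairs = []
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                pairs.append((w, (c, j - i)))

# Step 2: Positive PMI for every Word-Context pair.
n = len(pairs)
word_counts = Counter(w for w, _ in pairs)
ctx_counts = Counter(c for _, c in pairs)
ppmi = {
    (w, c): max(0.0, math.log((k / n) / ((word_counts[w] / n) * (ctx_counts[c] / n))))
    for (w, c), k in Counter(pairs).items()
}

# Step 3: cosine similarity between the (sparse) PPMI context vectors.
def vector(word):
    return {c: s for (w, c), s in ppmi.items() if w == word}

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    nu = math.sqrt(sum(s * s for s in u.values()))
    nv = math.sqrt(sum(s * s for s in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

print(cosine(vector("seattle"), vector("denver")))    # > 0: similar by type
print(cosine(vector("seattle"), vector("seahawks")))  # 0.0: no shared contexts
```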
Word Analogy Task
man is to woman as king is to ?
good is to best as smart is to ?
china is to beijing as russia is to ?
It turns out that the word-context based vector model we just learnt is good at such analogy tasks:
[king] – [man] + [woman] ≈ [queen]
Levy and Goldberg, Linguistic Regularities in Sparse and Explicit Word Representations, CoNLL, 2014.
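A minimal sketch of that arithmetic (it assumes an `embedding` dict mapping words to numpy vectors, e.g. loaded from a pre-trained model; nothing here comes from the cited paper):

```python
# A minimal, illustrative sketch of the analogy arithmetic. It assumes an
# `embedding` dict mapping words to numpy vectors (e.g. loaded from a
# pre-trained model); nothing here comes from the cited paper.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c, embedding, topn=1):
    """Solve 'a is to b as c is to ?' via [b] - [a] + [c]."""
    target = embedding[b] - embedding[a] + embedding[c]
    candidates = [(w, cosine(v, target)) for w, v in embedding.items()
                  if w not in (a, b, c)]  # exclude the query words themselves
    return sorted(candidates, key=lambda wc: -wc[1])[:topn]

# analogy("man", "woman", "king", embedding)  # expected: [("queen", ...)]
```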
Embeddings
The vectors we have been discussing so far are very high-dimensional (thousands, or even millions, of dimensions) and sparse. But there are techniques to learn lower-dimensional dense vectors for words using the same intuitions. These dense vectors are called embeddings.
Learning Dense Embeddings
Matrix Factorization: factorize the word-context matrix. E.g., LSA (Word-Document), GloVe (Word-NeighboringWord).
Neural Networks: a neural network with a bottleneck, with word and context as input and output respectively. E.g., Word2vec (Word-NeighboringWord).
(Figure: the word-context matrix, with rows Word1 … Wordn and columns Context1 … Contextk.)
Deerwester, Dumais, Landauer, Furnas, and Harshman, Indexing by latent semantic analysis, JASIS, 1990.
Pennington, Socher, and Manning, GloVe: Global Vectors for Word Representation, EMNLP, 2014.
Mikolov, Sutskever, Chen, Corrado, and Dean, Distributed representations of words and phrases and their compositionality, NIPS, 2013.
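As an illustration of the matrix-factorization route, a minimal sketch that derives dense vectors from a word-context matrix via truncated SVD (the matrix `M` and dimensionality are placeholders; this is classical LSA-style factorization, not the GloVe algorithm itself):

```python
# A minimal, illustrative sketch of the matrix-factorization route: truncated
# SVD of a word-context matrix M (rows = words, columns = contexts). M and
# `dim` are placeholders; this is LSA-style factorization, not GloVe itself.
import numpy as np

def dense_embeddings(M, dim=50):
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    # Keep the top `dim` singular directions; scale by the singular values.
    return U[:, :dim] * S[:dim]

# Each row of dense_embeddings(M) is a low-dimensional dense word vector.
```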
Exercise
Both Word2vec and GloVe define context as the neighboring word
only, without considering the distance from the current word.
How does this change the relationship that is learnt by the
embedding space?
How do word analogies work?
Visually, the vector {china → beijing} turns out to be almost parallel to the vector {russia → moscow}. But if you aren’t queasy about reading a lot of equations, read the following paper:
Arora, et al., RAND-WALK: A Latent Variable Model Approach to Word Embeddings, 2015.
Mikolov, Sutskever, Chen, Corrado, and Dean, Distributed representations of words and phrases and their compositionality, NIPS, 2013.
Word embeddings for Document Ranking
Traditional IR uses term matching
→ # of times the doc says Albuquerque
We can use word embeddings to compare all pairs of query-document terms
→ # of terms in the doc that relate to Albuquerque
(Figure: two example passages, one about Albuquerque and one not.)
Nalisnick, Mitra, Craswell, and Caruana, Improving Document Ranking with Dual Word Embeddings, in WWW, 2016.
Mitra, Nalisnick, Craswell, and Caruana, A Dual Embedding Space Model for Document Ranking, arXiv:1602.01137, 2016
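A hedged sketch of the all-pairs idea, heavily simplified from the cited Dual Embedding Space Model (it assumes an `embedding` dict of word vectors and ignores the IN/OUT embedding distinction the papers describe):

```python
# A minimal, illustrative sketch of all-pairs matching, heavily simplified
# from the cited DESM (it ignores the IN/OUT embedding distinction described
# in the papers). Assumes an `embedding` dict of numpy word vectors.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def all_pairs_score(query, doc, embedding):
    q_vecs = [embedding[w] for w in query.split() if w in embedding]
    d_vecs = [embedding[w] for w in doc.split() if w in embedding]
    # Average similarity of every query term to every document term.
    sims = [cosine(q, d) for q in q_vecs for d in d_vecs]
    return float(np.mean(sims)) if sims else 0.0
```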
Beyond words…
The Deep Semantic Similarity Model (DSSM) trains on multi-word short texts. As with word embeddings, you can train it to capture either Typical or Topical relationships.
Huang, Po-Sen, et al., Learning deep structured semantic models for web search using clickthrough data, CIKM, 2013.
Mitra and Craswell, Query Auto-Completion for Rare Prefixes, in CIKM, 2015.
What’s next?
Train your own or use a pre-trained embedding
Word2vec
Word2vec trained on queries
GloVe
DSSM
Get your hands dirty and try to build some fun demos!
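For example, a minimal training sketch with the gensim library (assuming the gensim 4.x API; the toy corpus is a placeholder for your own sentence or query data):

```python
# A minimal, illustrative training sketch (assumes the gensim 4.x API; the
# toy corpus is a placeholder for your own sentence or query data).
from gensim.models import Word2Vec

sentences = [
    ["seattle", "seahawks", "jerseys"],
    ["seattle", "seahawks", "highlights"],
    ["denver", "broncos", "jerseys"],
    ["denver", "broncos", "highlights"],
]
model = Word2Vec(sentences, vector_size=32, window=2, min_count=1, epochs=50)
print(model.wv.most_similar("seattle", topn=2))
```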
Remember, these are exciting times…
Fang et al., From Captions to Visual Concepts and Back, CVPR, 2015.
Vinyals et al., A Neural Conversational Model, ICML, 2015.
Thank you for listening!
Neu-IR 2016
The SIGIR 2016 Workshop on
Neural Information Retrieval
July 21st, 2016
Pisa, Tuscany, Italy
http://research.microsoft.com/neuir2016
https://twitter.com/neuir2016
(Call for Participation)
Organizers:
W. Bruce Croft, University of Massachusetts Amherst, US
Jiafeng Guo, Chinese Academy of Sciences, Beijing, China
Maarten de Rijke, University of Amsterdam, Amsterdam, The Netherlands
Bhaskar Mitra, Bing, Microsoft, Cambridge, UK
Nick Craswell, Bing, Microsoft, Bellevue, US