This document summarizes Simon Hughes' presentation on using vector representations for semantic matching in search. It discusses using word embeddings to learn vector representations of words that capture their semantic meaning based on context. Approaches for searching with word embeddings include expanding queries with related terms from the embedding model or clustering the embeddings and mapping queries to clusters. The document also covers techniques for indexing and searching vector representations in an inverted index, such as using locality-sensitive hashing or k-means trees to map vectors to discrete tokens that can be indexed.
Vectors in Search – Towards More Semantic Matching - Simon Hughes, Dice.com
1. Vectors in Search
– Towards More Semantic Matching
Simon Hughes
Chief Data Scientist, Dice.com
@hughes_meister
#Activate18 #ActivateSearch
2. Who Am I?
• Chief Data Scientist at DHI (owns Dice.com)
• Key Projects:
• Search and Match
• Dice Recommender Systems
• Dice Job Search
• Dice Talent Search 3.0 and 4.0
• Dice Skill Center
• Dice Career Advisory Pages
• Dice Salary Predictor
• Dice Career Paths
• PhD Candidate DePaul University
• Subject Area – Machine Learning and NLP
• Thesis – Extracting Causal Relations from Scientific Essays
• Contact Info:
• Email: simon.hughes@dhigroupinc.com
• Twitter: https://twitter.com/hughes_meister
3. Motivation
• Dice.com - leading US technology professional job board
• Jobs marketplace
• We connect technology talent with employers
• High quality searching and matching are critical to our value
proposition, for both our customers and our clients
• Need – high quality content-based recommender engine
• Automatically determine how well a job seeker matches a particular position,
and vice versa
• Requirements:
• A semantic matching engine – goes beyond keyword search to extract semantic information from job postings and resumes
• Deployed at scale using existing search infrastructure (Solr and
ElasticSearch)
• Github Repository for Talk:
• https://github.com/DiceTechJobs/VectorsInSearch
4. Agenda
• Why a Vector Representation?
• Learning Vector Representations
• Vector Based Search in an Inverted Index
5. Understanding Textual Data
Key Challenges:
• Synonymy – Multiple Words with the Same Meaning
• Related – typos, misspellings, acronyms, metonyms
• E.g. QA, Quality Assurance, Tester
• Polysemy – Ambiguity, a word has multiple meanings
• E.g. Bank, Book, Ape
• Hypernyms/Hyponyms – ‘type of’ relationships
• E.g. a dog (hyponym) is a type of animal (hypernym)
• Meronyms/Holonyms – ‘part of’ relationships
• E.g. finger (meronym) is a ‘part of’ a hand (holonym)
• What Words / Phrases are More Important?
• Named Entity Extraction (NER), Controlled Vocabularies
• Collocation (phrase) detection – e.g. “data scientist” vs “scientist who works with data”
• Stop words
• Term weighting schemes - e.g. tf.idf
6. How to Solve these Problems?
• Map documents and queries to a semantic space
• “From Strings to Things”?
• Google KG marketing
• Map words into concepts / semantics
• From strings to concepts
• How to represent?
(Slide diagram: mapping terms such as Java, Javascript, Big Data, Frameworks, Tools and Technologies into related concepts)
8. Representations
• Distributed Representation
• Dense vector
• Components of the vector represent learned concepts / latent variables
• Similar items have similar representations
• Most existing approaches produce dense vectors
(Diagram: a dense vector and a one-hot vector for the term “Java”)
• Local representation
• Non distributed
• Sparse
• E.g. one-hot-vector
• One vector component per unique word
• Similar items have different representations
9. Agenda
• Why a Vector Representation?
• Learning Vector Representations
• Vector Based Search in an Inverted Index
10. The Importance of Context
How do we learn the meaning (semantics) of words?
• Distributional Hypothesis
• Words occurring in similar contexts have similar meanings
• Harris 1954
• “a word is characterized by the company it keeps”
• Firth 1957
• Ignores word order, grammar and syntax
• Latent Relation Hypothesis
• Pairs of words occurring in similar patterns have similar semantic relations
• Turney et al, 2003
• Patterns – X cuts Y, X works with Y, etc
• Word order and grammatical relations matter
• Further reading - Distributional approaches to word meanings
11. Learning Meaning from Context
Bag of Words Approaches – ignore word order
• Latent Models
• Context - Documents
• LSA
• LDA
• Semantic Vector Space Model
• Word Embeddings
• Context – word window
• Word2vec
• GloVe
• Simple linear language models
• History - http://blog.aylien.com/a-review-of-the-recent-history-of-natural-language-processing/
• For document embeddings
• Average or idf weighted average of word vectors
• Sentence / Document Embeddings
• Context – document + word window
• E.g. Doc2vec
• Context – surrounding sentences
• E.g. skip-thought vectors
12. Word2Vec
• By Aelu013 [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0) ], from Wikimedia Commons
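As a rough illustration of how such word vectors are trained, here is a minimal sketch using the gensim library (my example, not from the talk); the corpus file name and the gensim 4.x parameter names are assumptions:

# Minimal word2vec training sketch using gensim (illustrative; gensim 4.x parameter names assumed)
from gensim.models import Word2Vec

# assume corpus.txt holds one pre-tokenized job posting or resume per line
sentences = [line.split() for line in open("corpus.txt", encoding="utf-8")]

# skip-gram model, 100-dimensional vectors, 5-word context window
model = Word2Vec(sentences, vector_size=100, window=5, sg=1, min_count=5)

# words that occur in similar contexts end up with similar vectors
print(model.wv.most_similar("java", topn=5))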
13. Limitations of BOW Approaches:
• Shallow representation
• Word embeddings – limited to the word level
• Latent models – document level but doesn’t encode relational
information
• Synonymy - learn relatedness, not true synonyms
• E.g. Antonyms have similar vectors
• Polysemy – cannot encode different meanings of same word
• Global model not a local model
14. Beyond BOW - Deep Language Models
• Deep Language Model Embeddings
• Derived from the internal state of a deep LM
• Learns deep representation of sequences of words in context
• Can adjust word vectors based on their current context
• “NLP’s ImageNet moment”
• Achieved state of the art results on many NLP tasks
• Consistently out-perform word embedding models
• Example models - ELMo, ULMFiT, OpenAI Transformer
• Used for encoding sentences not whole documents
• Hard to scale
15. Deep Language Models
p(w1, w2, …, wn) = p(w1) · p(w2|w1) · p(w3|w1,w2) · … · p(wn|w1,…,wn-1)
(Diagram: an LSTM language model – Begin, w1, w2, w3 are fed in step by step, and the network outputs p(w1), p(w2|w1), p(w3|w1,w2), p(w4|w1,w2,w3), …)
16. Embedding Models for Search
• Word Embedding Approaches
• Cluster Word Embeddings
• “Representing Documents and Queries as Sets of Word Embedded Vectors for
Information Retrieval”
• Clustered word2vec vectors using k-means
• Documents represented as clusters of word vectors
• Query – map the query’s word vectors to clusters by similarity to the cluster centroids
• Outperformed Jelinek-Mercer LM similarity using a VSM
• Average Word Embeddings
• From Chapter 5 of Deep Learning for Search
• Author - Tommaso Teofili
• Query and document represented as average of word2vec vectors
• Computing a weighted average using idf worked best
• Outperformed BM25 using cosine similarity
• BM25 + word2vec – highest NDCG score
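A minimal sketch of the idf-weighted averaging described above, assuming word_vectors is a dict of term → numpy vector and idf_lookup maps terms to idf weights (both names are illustrative, not from the book):

# Illustrative idf-weighted average embedding (word_vectors and idf_lookup are assumed inputs)
import numpy as np

def doc_embedding(tokens, word_vectors, idf_lookup, dim=100):
    vecs, weights = [], []
    for tok in tokens:
        if tok in word_vectors:
            vecs.append(word_vectors[tok])
            weights.append(idf_lookup.get(tok, 1.0))   # weight each word vector by its idf
    if not vecs:
        return np.zeros(dim)
    return np.average(np.array(vecs), axis=0, weights=weights)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# score a document against a query by the cosine of their averaged embeddings:
# score = cosine(doc_embedding(query_tokens, wv, idf), doc_embedding(doc_tokens, wv, idf))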
17. Embedding Models for Search
• Dual Embedding Space Model (DESM)
• Research from Microsoft
• Extends word2vec
• Learns a dual embedding for queries and documents
• Paper - https://arxiv.org/pdf/1602.01137.pdf
• Evaluation
• Compared BM25, LSA and DESM on Bing Query Log Data
• Metrics - NDCG@1, NDCG@3, NDCG@5
• Results
• LSA and DESM both out-performed BM25
• DESM out-performed LSA
• DESM + BM25 out-performed all other approaches
18. Agenda
• Why a Vector Representation?
• Learning Vector Representations
• Vector Based Search in an Inverted Index
19. Vectors in Search
• Dense Embedding Vector:
• Dense
• D dimensional
• D = 50-1000
• Inverted index:
• Sparse
• Pivoted by term
• V = Vocabulary
• |V| =100k+
• Fast because sparse
[+0.12, -0.34, -0.12, +0.27, +0.63]
Term Posting List
Java 1,5,100,102
.NET 2,4,600,605,1000
C# 2,88,105,800
SQL 130,433,648,899,1200
Html 1,2,10,30,55,202,252,30,598,…
20. Searching with Word Embeddings
Approaches for using word embeddings:
• Top N terms
• Expand query using top n terms from model
• Boost expansions by cosine similarity
• Can use as a boost query, a re-rank query or a straight term expansion
• Q = "java developer"^10
OR "java j2ee developer"^0.91 OR "java architect"^0.89
OR "lead java developer"^0.87 OR "j2ee developer"^0.86
OR "java engineer"^0.86
• Term Clustering
• Cluster embeddings using a clustering algorithm
• E.g. k-means
• Compute different sized clusters, k=100,1000,10000
• Map clusters to tokens and index
• Different fields for each k
• Larger k fields – bigger boost or rely on idf scoring
• Query expands to top clusters, boosted by similarity
• Q = "java developer"^10
OR cluster_k1000:5894^5
OR cluster_k100:23^2.5
OR cluster_k10:8^1.25
• See https://github.com/DiceTechJobs/ConceptualSearch
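A hedged sketch of the term-clustering idea above, using scikit-learn k-means; the word_vectors dict and the cluster_k* token format are illustrative assumptions rather than Dice's actual schema:

# Illustrative term clustering with scikit-learn k-means
import numpy as np
from sklearn.cluster import KMeans

terms = list(word_vectors.keys())                  # assumed dict: term -> embedding vector
matrix = np.array([word_vectors[t] for t in terms])

cluster_tokens = {}
for k in (100, 1000, 10000):
    km = KMeans(n_clusters=k, n_init=10).fit(matrix)
    # map each term to a token such as "cluster_k100:23" for indexing in a per-k field
    cluster_tokens[k] = {t: "cluster_k%d:%d" % (k, c) for t, c in zip(terms, km.labels_)}

# at query time, expand the query with the cluster tokens of its terms,
# boosting each by the term's similarity to the cluster centroid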
21. Searching Vectors – k-NN Search
• K-NN search
• Find the k closest neighbors to query vector according to similarity metric
• Usually cosine similarity or Euclidean distance
• Definitions
• D = number of components in the vector
• N = number of documents
• Brute Force Search:
• O(ND) = linear
• What if N and/or D are very large?
• Vs. Inverted Index
• Sublinear
• Makes uses of sparsity of terms
• B-tree or distributed hash table lookup for terms, iterate the posting lists, re-rank matches – O(n log n)
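For concreteness, a brute-force cosine k-NN in numpy – each query touches all N documents and all D components, which is what the approximate methods below try to avoid (my sketch, not from the talk):

# Brute-force k-NN by cosine similarity - O(N*D) per query (illustrative sketch)
import numpy as np

def knn(query_vec, doc_matrix, k=10):
    # doc_matrix: (N, D) array of document vectors; query_vec: (D,) array
    doc_norms = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    sims = doc_norms @ q                      # N dot products of length D
    top = np.argsort(-sims)[:k]               # indices of the k most similar documents
    return list(zip(top.tolist(), sims[top].tolist()))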
22. What is the Optimal Representation for a
Vector in an Inverted Index?
What properties would such a representation have?
• For Performance
• Sparse representation necessary to leverage inverted index
• For Relevancy
• Distributed representation
• Each document should be a collection of tokens
• Tokens represent some semantic feature of the space
• Similarity is preserved
• Similar vectors must also be similar under this new representation
• Zipfian distribution of tokens
• “We need a Zipfian Distribution” – John Berryman (Co-author of ‘Relevant
Search’)
• Tokenizing Embedding Spaces
23. Zipf’s Law
• The frequency of terms in a corpus follows a power-law distribution
• A small number of tokens are very common – these filter out irrelevant docs
• A large number of tokens are very rare – these discriminate between similar matches
• (Figure: distribution of last names - By Thekohser [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons)
24. Approximate Nearest Neighbor Search
• Faster than full k-NN, with some loss in accuracy
• Approaches can be either:
• Data Dependent
• Learns and adjusts from the data
• Makes indexing new documents hard
• Data Independent
• Some Approaches:
• KD Tree
• LSH
• Heuristic Methods
• K-Means Tree
• Randomized KD Forest
• Paper: https://arxiv.org/abs/1603.09596
• HNSW (Hierarchical Navigable Small World Graphs) – top performer on http://ann-benchmarks.com/
• Paper: https://arxiv.org/pdf/1603.09320.pdf
• Vector Thresholding
• Choice of similarity metric is important in choosing an algorithm
25. KD Trees
• Construction
• Constructs a binary search tree by recursively partitioning the search space along the vector dimensions
• Partitions are chosen orthogonal to each dimension
• Usually at the median
• Querying
• Described here - https://en.wikipedia.org/wiki/K-d_tree#Complexity
• Limitations
• How to implement efficiently in an inverted index?
• Lucene 6.0 dimensional points
• See also - https://www.elastic.co/blog/lucene-points-6.0
• Not exposed in Solr and Elastic Search AFAIK
• Tree needs rebalancing on each insertion
• Curse of dimensionality
• Requires N >> 2^D, for N points and D dimensions
• Complexity is essentially linear for real-world vectors (D >= 50)
• Approximate KNN Search
• Possible with KD tree – limit the number of searched nodes
• Typically outperformed by other ANN approaches
26. Locality Sensitive Hashing
• LSH hashes items to discrete buckets
• More buckets – slower but more accurate
• Locality Preserving
• Maximizes the probability that similar items occupy the same buckets
• Random Projection LSH (SimHash)
• LSH variant for cosine similarity
• For each hash function, generate a random d-dimensional unit vector r; then for each vector v:
• hash(v) = sign(v · r)
• Produces a binary encoding, one bit for each hash function (random vector)
• The probability that two vectors’ hashes match is proportional to their cosine similarity
• Output of hash function can be indexed and searched using Hamming Distance
• Intuition - Van Durme and Lall -
http://www.cs.jhu.edu/~vandurme/papers/VanDurmeLallACL10-slides.pdf
• Data independent, although data dependent variations exist
• However, for real data, it is typically out-performed by heuristic methods
like k-means trees, and randomized KD-trees
• https://lear.inrialpes.fr/pubs/2011/JDS11/jegou_searching_with_quantization.pdf
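A minimal random-projection (SimHash-style) sketch of the hashing step above; the number of bits and the use of numpy are my assumptions:

# Random-projection LSH sketch for cosine similarity (illustrative)
import numpy as np

rng = np.random.default_rng(42)

def make_hyperplanes(n_bits, dim):
    # one random unit vector r per hash bit
    planes = rng.normal(size=(n_bits, dim))
    return planes / np.linalg.norm(planes, axis=1, keepdims=True)

def lsh_hash(v, planes):
    # hash(v) = sign(v . r) for each random vector r -> one bit per hash function
    return "".join("1" if np.dot(v, r) >= 0 else "0" for r in planes)

# the Hamming distance between two hashes approximates the angle between the vectors,
# which is why it can stand in for cosine similarity at search time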
27. Encoding LSH Hash into the Index
[+0.08, -0.16, -0.12, +0.27, +0.63, -0.01, +0.16, -0.48]
• Hash into bits:
[1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1]
• Either store the hash fingerprint as a single token:
["10110110100101"]
• Or store each bit as a token using its position and value:
["00_1","01_0","02_1","03_1","04_0","05_1","06_1","07_0","08_1","09_0","10_0","11_1","12_0","13_1"]
• Use the mm parameter to speed up search
• Or store shingles of the binary tokens
• Note – this representation is not sparse!
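A hedged sketch of the token encodings above, given a bit-string hash; the shingle length and exact token formats are illustrative:

# Illustrative encodings of a bit-string hash into index tokens
def fingerprint_token(bits):
    # option 1: the whole hash as a single token
    return [bits]

def positional_tokens(bits):
    # option 2: one token per bit - "position_value", e.g. "00_1"
    return ["%02d_%s" % (i, b) for i, b in enumerate(bits)]

def shingle_tokens(bits, n=4):
    # option 3: shingles of n consecutive bits, prefixed by their start position
    return ["%02d_%s" % (i, bits[i:i + n]) for i in range(len(bits) - n + 1)]

print(positional_tokens("10110110100101")[:4])   # ['00_1', '01_0', '02_1', '03_1']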
29. K-Means Tree
• Hierarchical Clustering Algorithm
• Recursively partitions vector space using k-Means clustering
• Fast - k-means runs in linear time using Lloyd’s heuristic
• Most other clustering algorithms run in quadratic time or worse
• Tree Construction
• For some branching factor b create b clusters
• Create b nodes, store centroid for each node
• For each new cluster, cluster its members into b smaller clusters
• These form child nodes of their parent clusters, forming a tree structure
• Continue until < b members per cluster
• Paper
• "Scalable Nearest Neighbor Algorithms for High Dimensional Data" - Marius
Muja, 2014 – implemented in the FLANN library
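A rough sketch of the tree construction described above, using scikit-learn's k-means rather than the FLANN implementation; the branching factor and node-numbering scheme are my assumptions:

# Rough k-means tree construction sketch (illustrative, not the FLANN implementation)
import numpy as np
from sklearn.cluster import KMeans

def build_kmeans_tree(vectors, branching=8, node_id=0):
    # vectors: (N, D) numpy array; stop recursing once a cluster has fewer than `branching` members
    if len(vectors) < branching:
        return {"id": node_id, "centroid": vectors.mean(axis=0), "children": []}
    km = KMeans(n_clusters=branching, n_init=10).fit(vectors)
    children = []
    for b in range(branching):
        members = vectors[km.labels_ == b]
        # each cluster's members are recursively clustered into b smaller clusters (child nodes)
        children.append(build_kmeans_tree(members, branching, node_id * branching + b + 1))
    return {"id": node_id, "centroid": vectors.mean(axis=0), "children": children}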
31. Lucene Implementation Details
• Pre-train a k-means tree on a representative subset of the index
• Indexing:
• Convert all nodes from tree into unique tokens
• For each vector, find the closest matching leaf node
• Index vector with tokens for that leaf node, and all parent nodes
• Querying
• Find top n matching nodes from tree
• Convert nodes into a query, boosted by similarity to query vector
• 'q': 'clusters:("121"^0.9 "909"^0.88 "523"^0.91)'
• Create a re-rank query to brute force re-rank the top matching documents
• 'rq': '{!rerank reRankQuery=$rqq reRankDocs=1000 reRankWeight=99}'
• 'rqq': '{!payloadEdismax v=$vq}'
• 'vq': 'vector:("0"^-0.0136 "1"^0.05387 "2"^0.070476 "3"^0.14529 …)'
• Uses a special payload query parser (payload_score is insufficient)
• See https://github.com/DiceTechJobs/VectorsInSearch
• *Better approach – use doc values field or Lucene dimensional points
• Trade speed for accuracy depending on depth of tree search, and how many vectors are re-ranked
• Tree nodes follow a Zipfian distribution
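A hedged sketch of assembling the Solr request described above; the q/rq/rqq/vq parameters and the payloadEdismax parser follow the slide, while the URL, field names and helper signature are illustrative:

# Illustrative assembly of the Solr request parameters
import requests

def build_params(top_nodes, query_vector, rerank_docs=1000):
    # top_nodes: [(node_id, similarity_to_query_vector), ...] from the k-means tree
    cluster_q = " ".join('"%s"^%.2f' % (node, sim) for node, sim in top_nodes)
    vector_q = " ".join('"%d"^%.5f' % (i, w) for i, w in enumerate(query_vector))
    return {
        "q": "clusters:(%s)" % cluster_q,
        "rq": "{!rerank reRankQuery=$rqq reRankDocs=%d reRankWeight=99}" % rerank_docs,
        "rqq": "{!payloadEdismax v=$vq}",
        "vq": "vector:(%s)" % vector_q,
    }

# response = requests.get("http://localhost:8983/solr/jobs/select", params=build_params(nodes, vec))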
32. Lucene Implementation Details
• Cluster Field – stores cluster tokens
• Turn off all norms and tf-idf weighting; use a custom Hamming similarity class
• Vector Field – stores vectors for re-ranking
• Stores components plus payloads, custom similarity class using payloads
• Similarity classes: https://github.com/DiceTechJobs/SolrPlugins
34. Other Heuristic Methods
• Randomized KD Forest
• Constructs a number of KD trees choosing axis to split on randomly
• Searches all trees in parallel to a fixed number of leaf nodes
• KD Trees are very deep
• How to implement efficiently in an inverted index?
• Hierarchical Navigable Small World Graphs
• Hierarchical graph based model
• Paper - https://arxiv.org/pdf/1603.09320.pdf
• Consistently outperforms other ANN methods on the ANN benchmarks page
• See - http://ann-benchmarks.com/
35. Distribution of Vector Components
• The distribution of components from our vectors is Gaussian, with mean 0
• This means that most vector components are very small
• These components will have minimal impact on the cosine score
• (Figure: histogram of components taken from 350k vectors, mean = 0.0)
36. Vector Thresholding with Tokenization
[+0.08, -0.16, -0.12, +0.27, +0.63, -0.01, +0.16, -0.48]
[ 0, 0, 0, 0, +0.63, 0, 0, -0.48]
• Drop all but the largest components
• Round each weight to lower precision
• Encode position and weight as a single token:
[“04i+0.6”, “07i-0.5”]
• Paper: “Semantic Vector Encoding and Similarity Search Using Fulltext Search Engines”
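A small sketch of the threshold-and-tokenize step above; the keep-fraction parameter and zero-padded token format are my assumptions, chosen so the output reproduces the slide's example:

# Sketch of thresholding a vector and encoding positions + rounded weights as tokens
import numpy as np

def threshold_tokens(vec, keep_fraction=0.4):
    vec = np.asarray(vec)
    n_keep = max(1, int(len(vec) * keep_fraction))
    keep = np.argsort(-np.abs(vec))[:n_keep]      # positions of the largest |components|
    # encode position and rounded weight as a single token, e.g. "04i+0.6"
    return ["%02di%+.1f" % (i, vec[i]) for i in sorted(keep)]

print(threshold_tokens([+0.08, -0.16, -0.12, +0.27, +0.63, -0.01, +0.16, -0.48], 0.25))
# -> ['04i+0.6', '07i-0.5'], matching the slide's example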
37. Vector Thresholding with Payloads
[+0.08, -0.16, -0.12, +0.27, +0.63, -0.01, +0.16, -0.48]
[ 0, 0, 0, 0, +0.63, 0, 0, -0.48]
• Drop all but the largest components
• I modified the previous idea, using payload score queries
Q=vector:("3"^-0.0136 "14"^0.05387 "56"^-0.070476 "71"^0.14529 …)&defType=payloadEdismax
• Indexing: Store remaining (non zero) tokens in index with payloads
• Querying: Uses custom payload query parser + similarity class
• See Github repo, and solr config in Kmeans tree section
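A similar sketch for the payload variant – the thresholded component indices become terms and the weights become boosts in the vector field query (the vector field and payloadEdismax parser follow the slides; the helper itself is illustrative):

# Sketch of building the payload query from a thresholded vector
import numpy as np

def payload_query(vec, keep_fraction=0.4):
    vec = np.asarray(vec)
    n_keep = max(1, int(len(vec) * keep_fraction))
    keep = sorted(np.argsort(-np.abs(vec))[:n_keep])
    clauses = " ".join('"%d"^%.5f' % (i, vec[i]) for i in keep)
    return {"q": "vector:(%s)" % clauses, "defType": "payloadEdismax"}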
38. Performance Comparison - Initial Results
• Hardware - Mac Book Pro, 2.6Ghz i7 CPU, 16G Ram, SSD
• Search Engine:
• Solr 7.5, single shard
• Index: 700k documents
• 1000 sample vector queries, requests were single threaded
• Metric – precision @10 compared to brute force
• Updated results – check https://github.com/DiceTechJobs/VectorsInSearch
39. Performance Comparison - Initial Results
• Each algorithm was run over a range of different parameter values, to show the recall/speed trade-off
40. Performance Comparison - Initial Results
Algorithm | Precision@10 | Queries Per Sec (Mean Qry Time)
LSH (Hamming Similarity) | 0.69 | 1.3 qps (757 ms)
Kmeans Tree (trained on index) | 0.88 | 9.2 qps (170 ms)
Kmeans Tree (trained on sample) | 0.85 | 9.5 qps (105 ms)
Vector Thresholding with Tokenization (top 40% of components) | 0.85 | 3.5 qps (312 ms)
Vector Thresholding with Payloads (top 40% of components) | 0.94 | 1.8 qps (547 ms)
41. The Ultimate Solution - Sparse Coding?
• Also called ‘Dictionary Learning’
• Learns a sparse ‘overcomplete’ representation of a vector
• Example Algorithms:
• Sparse Auto-Encoder
• K-SVD
• Encoding needs to preserve the Metric Space
• Similar items need to remain similar after encoding
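As an illustration of sparse coding (not something the talk prescribes), scikit-learn's DictionaryLearning can learn an overcomplete dictionary and produce sparse codes whose non-zero positions could be indexed as tokens; the sizes below are arbitrary stand-ins:

# Illustrative sparse coding with scikit-learn dictionary learning
import numpy as np
from sklearn.decomposition import DictionaryLearning

dense_vectors = np.random.randn(300, 50)           # stand-in for learned document embeddings

# learn an overcomplete dictionary (more atoms than input dimensions)
dl = DictionaryLearning(n_components=200, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5, max_iter=50)
sparse_codes = dl.fit_transform(dense_vectors)

# each row now has at most 5 non-zero components; their positions could be
# indexed as discrete tokens, much like terms in an inverted index
print((sparse_codes != 0).sum(axis=1).mean())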
Other Relevant Approaches
• Word2bits - learns binary quantized word vectors
• https://github.com/agnusmaximus/Word2Bits
Metrics – recall is often used for measuring synonymy and related problems, while precision and traditional IR metrics are better at measuring how well a user’s intent is disambiguated
Context – bag of words
Global - learn semantic representations of terms
Address synonymy (word level)
Learn collocations (phrases)
Local – can be used to disambiguate ambiguous terms
Address polysemy
By Aelu013 [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons.
For LSA illustration, and an excellent explanation, see here - http://iv.slis.indiana.edu/sw/lsa.html
Word vectors - don’t learn true synonyms, so they don’t truly solve the synonymy problem, and don’t handle polysemy, as the same vector is used for a word regardless of its context.
Deep LMs capture the meaning of a sequence of words in context – not just individual words in isolation.
How do we represent dense vectors in a form that works inside an inverted index?
Note – it is important to do collocation (phrase detection) before building an embedding model.
Embeddings work better when phrases are passed as single tokens.
Excellent explanation of SimHash – Van Durme and Lall presentation, slide 15.