Vectors in Search
– Towards More Semantic Matching
Simon Hughes
Chief Data Scientist, Dice.com
@hughes_meister
#Activate18 #ActivateSearch
Who Am I?
• Chief Data Scientist at DHI (owns Dice.com)
• Key Projects:
• Search and Match
• Dice Recommender Systems
• Dice Job Search
• Dice Talent Search 3.0 and 4.0
• Dice Skill Center
• Dice Career Advisory Pages
• Dice Salary Predictor
• Dice Career Paths
• PhD Candidate DePaul University
• Subject Area – Machine Learning and NLP
• Thesis – Extracting Causal Relations from Scientific Essays
• Contact Info:
• Email: simon.hughes@dhigroupinc.com
• Twitter: https://twitter.com/hughes_meister
Motivation
• Dice.com - leading US technology professional job board
• Jobs marketplace
• We connect technology talent with employers
• High quality searching and matching are critical to our value
proposition, for both our customers and our clients
• Need – high quality content-based recommender engine
• Automatically determine how well a job seeker matches a particular position,
and vice versa
• Requirements:
• A semantic matching engine – goes beyond keyword search, to extracting
semantic information from job postings and resumes
• Deployed at scale using existing search infrastructure (Solr and
Elasticsearch)
• Github Repository for Talk:
• https://github.com/DiceTechJobs/VectorsInSearch
Agenda
• Why a Vector Representation?
• Learning Vector Representations
• Vector Based Search in an Inverted Index
Understanding Textual Data
Key Challenges:
• Synonymy – Multiple Words with the Same Meaning
• Related – typos, misspellings, acronyms, metonyms
• E.g. QA, Quality Assurance, Tester
• Polysemy – Ambiguity, a word has multiple meanings
• E.g. Bank, Book, Ape
• Hypernyms/Hyponyms – ‘type of’ relationships
• E.g. a dog (hyponym) is a type of animal (hypernym)
• Meronyms/Holonyms – ‘part of’ relationships
• E.g. finger (meronym) is a ‘part of’ a hand (holonym)
• What Words / Phrases are More Important?
• Named Entity Extraction (NER), Controlled Vocabularies
• Collocation (phrases) detection – e.g. “data scientist” vs “scientist who works with data”
• Stop words
• Term weighting schemes - e.g. tf.idf
How to Solve these Problems?
• Map documents and queries to a semantic space
• “From Strings to Things”?
• Google KG marketing
• Map words into concepts / semantics
• From strings to concepts
• How to represent?
[Diagram: mapping words such as “Java” and “Javascript” to concepts – Technologies, Big Data Tools, Frameworks]
Representations
• Local representation
• Non distributed
• Sparse
• E.g. one-hot-vector
• Similar items have different representations
[Diagram: one-hot vector for the word “Java”]
Representations
• Distributed Representation
• Dense vector
• Components of the vector represent learned concepts / latent variables
• Similar items have similar representations
• Most existing approaches produce dense vectors
• Contrast with the local representation
• Non distributed, sparse, e.g. one-hot-vector
• One vector component per unique word
• Similar items have different representations
[Diagram: dense (distributed) vs one-hot (local) vectors for the word “Java”]
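To make the two representations concrete, here is a minimal Python/numpy sketch (my illustration, not from the talk) contrasting a local one-hot vector with a dense distributed one; the toy vocabulary and the 5-d dense values are made up:

import numpy as np

vocab = {"java": 0, "javascript": 1, "python": 2}  # toy vocabulary

def one_hot(word):
    # Local representation: one component per unique word, all zeros but one
    v = np.zeros(len(vocab))
    v[vocab[word]] = 1.0
    return v

# Distributed representation: each component is a learned latent concept
dense = {"java": np.array([0.12, -0.34, -0.12, 0.27, 0.63])}  # illustrative

# One-hot vectors of distinct words are orthogonal (dot product 0), so similar
# items get unrelated representations; learned dense vectors of related words
# instead end up close together in the vector space.
print(one_hot("java") @ one_hot("javascript"))  # 0.0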
Agenda
• Why a Vector Representation?
• Learning Vector Representations
• Vector Based Search in an Inverted Index
The Importance of Context
How do we learn the meaning (semantics) of words?
• Distributional Hypothesis
• Words occurring in similar contexts have similar meanings
• Harris 1954
• “a word is characterized by the company it keeps”
• Firth 1957
• Ignores word order, grammar and syntax
• Latent Relation Hypothesis
• Pairs of words occurring in similar patterns have similar semantic relations
• Turney et al, 2003
• Patterns – X cuts Y, X works with Y, etc
• Word order and grammatical relations matter
• Further reading - Distributional approaches to word meanings
Learning Meaning from Context
Bag of Words Approaches – ignore word order
• Latent Models
• Context - Documents
• LSA
• LDA
• Semantic Vector Space Model
• Word Embeddings
• Context – word window
• Word2vec
• GloVe
• Simple linear language models
• History - http://blog.aylien.com/a-review-of-the-recent-history-of-natural-language-processing/
• For document embeddings
• Average or idf weighted average of word vectors
• Sentence / Document Embeddings
• Context – document + word window
• E.g. Doc2vec
• Context – surrounding sentences
• E.g. skip-thought vectors
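As a concrete sketch of the document-embedding bullets above: a plain or idf-weighted average of word vectors, assuming a pre-trained gensim KeyedVectors model and an idf lookup table (both hypothetical here):

import numpy as np
from gensim.models import KeyedVectors

wv = KeyedVectors.load("word2vec.kv")  # hypothetical pre-trained vectors
idf = {"java": 2.1, "developer": 1.3}  # hypothetical idf table

def doc_embedding(tokens, use_idf=True):
    # Average (or idf-weighted average) of the word vectors in a document
    vecs, weights = [], []
    for t in tokens:
        if t in wv:
            vecs.append(wv[t])
            weights.append(idf.get(t, 1.0) if use_idf else 1.0)
    return np.average(vecs, axis=0, weights=weights)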
Word2Vec
• By Aelu013 [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0) ], from Wikimedia Commons
Limitations of BOW Approaches:
• Shallow representation
• Word embeddings – limited to the word level
• Latent models – document level but doesn’t encode relational
information
• Synonymy - learn relatedness, not true synonyms
• E.g. Antonyms have similar vectors
• Polysemy – cannot encode different meanings of same word
• Global model not a local model
Beyond BOW - Deep Language Models
• Deep Language Model Embeddings
• Derived from the internal state of a deep LM
• Learns deep representation of sequences of words in context
• Can adjust word vectors based on their current context
• “NLP’s ImageNet moment”
• Achieved state of the art results on many NLP tasks
• Consistently out-perform word embedding models
• Example models - ELMo, ULMFiT, OpenAI Transformer
• Used for encoding sentences not whole documents
• Hard to scale
Deep Language Models
p(w1, w2, w3, w4, …, wn) = ∏i p(wi | w1, w2, …, wi-1)
[Diagram: an LSTM language model unrolled over (Begin, w1, w2, w3), emitting p(w1), p(w2|w1), p(w3|w1,w2), p(w4|w1,w2,w3)]
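As a minimal sketch of this factorization in code, a toy LSTM language model in PyTorch (my illustration; the models named on the previous slide are far larger):

import torch.nn as nn

class LstmLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):              # tokens: (batch, seq) of word ids
        h, _ = self.lstm(self.emb(tokens))  # hidden state at every position
        return self.out(h)                  # logits for p(w_i+1 | w_1..w_i)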
Embedding Models for Search
• Word Embedding Approaches
• Cluster Word Embeddings
• “Representing Documents and Queries as Sets of Word Embedded Vectors for
Information Retrieval”
• Clustered word2vec vectors using k-means
• Documents represented as clusters of word vectors
• Query - map query vectors as similarity to cluster centroids
• Outperformed Jelinek-Mercer LM similarity using the VSM
• Average Word Embeddings
• From Chapter 5 of Deep Learning for Search
• Author - Tommaso Teofili
• Query and document represented as average of word2vec vectors
• Computing a weighted average using idf worked best
• Outperformed BM25 using cosine similarity
• BM25 + word2vec – highest NDCG score
Embedding Models for Search
• Dual Embedding Space Model (DESM)
• Research from Microsoft
• Extends word2vec
• Learns a dual embedding for queries and documents
• Paper - https://arxiv.org/pdf/1602.01137.pdf
• Evaluation
• Compared BM25, LSA and DESM on Bing Query Log Data
• Metrics - NDCG@1, NDCG@3, NDCG@5
• Results
• LSA and DESM both out-performed BM25
• DESM out-performed LSA
• DESM + BM25 out-performed all other approaches
Agenda
• Why a Vector Representation?
• Learning Vector Representations
• Vector Based Search in an Inverted Index
Vectors in Search
• Dense Embedding Vector:
• Dense
• D dimensional
• D = 50-1000
• Inverted index:
• Sparse
• Pivoted by term
• V = Vocabulary
• |V| =100k+
• Fast because sparse
[+0.12, -0.34, -0.12, +0.27, +0.63]
Term   Posting List
Java   1, 5, 100, 102
.NET   2, 4, 600, 605, 1000
C#     2, 88, 105, 800
SQL    130, 433, 648, 899, 1200
Html   1, 2, 10, 30, 55, 202, 252, 30, 598, …
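For contrast, an inverted index is just a sparse map from term to posting list; a toy dictionary-based sketch:

from collections import defaultdict

def build_index(docs):
    # docs: {doc_id: text}; returns term -> sorted posting list of doc ids
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

index = build_index({1: "Java Html", 2: ".NET C# Html", 5: "Java"})
print(index["java"])  # [1, 5] – only docs containing the term are touched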
Searching with Word Embeddings
Approaches for using word embeddings:
• Top N terms
• Expand query using top n terms from model
• Boost expansions by cosine similarity
• Can use as a boost query, a re-rank query or a straight term expansion
• Q = “java developer”^10
OR ”java j2ee developer”^0.91 OR “java architect”^0.89
OR “lead java developer”^0.87 OR “j2ee developer”^0.86
OR “java engineer”^0.86
• Term Clustering
• Cluster embeddings using a clustering algorithm
• E.g. k-means
• Compute different sized clusters, k=100,1000,10000
• Map clusters to tokens and index
• Different fields for each k
• Larger k fields – bigger boost or rely on idf scoring
• Query expands to top clusters, boosted by similarity
• Q = “java developer”^10
OR cluster_k1000:5894^5
OR cluster_k100:23^2.5
OR cluster_k10:8^1.25
• See https://github.com/DiceTechJobs/ConceptualSearch
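A sketch of the top-n term expansion above, assuming a gensim KeyedVectors model trained with phrase detection so that multi-word terms like "java_developer" are in its vocabulary (model path and boost values are illustrative):

from gensim.models import KeyedVectors

vectors = KeyedVectors.load("phrase_vectors.kv")  # hypothetical model path

def expand_query(query, topn=5, base_boost=10):
    token = query.lower().replace(" ", "_")
    clauses = ['"%s"^%d' % (query, base_boost)]
    for term, sim in vectors.most_similar(token, topn=topn):
        # Boost each expansion by its cosine similarity to the original term
        clauses.append('"%s"^%.2f' % (term.replace("_", " "), sim))
    return " OR ".join(clauses)

print(expand_query("java developer"))
# "java developer"^10 OR "java j2ee developer"^0.91 OR "java architect"^0.89 ...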
Searching Vectors – k-NN Search
• K-NN search
• Find the k closest neighbors to query vector according to similarity metric
• Usually cosine similarity or Euclidean distance
• Definitions
• D = number of components in the vector
• N = number of documents
• Brute Force Search:
• O(ND) = linear
• What if N and/or D are very large?
• Vs. Inverted Index
• Sublinear
• Makes use of sparsity of terms
• BTree or Distributed Hash Table lookup for terms, iterate posting list, re-rank matches – O(n log n)
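For reference, brute-force k-NN over N documents of dimension D is a single O(ND) matrix-vector product; a numpy sketch using cosine similarity:

import numpy as np

def knn(query, docs, k=10):
    # docs: (N, D) matrix of document vectors; query: (D,) vector
    docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    sims = docs_n @ (query / np.linalg.norm(query))  # O(N * D)
    return np.argsort(-sims)[:k]                     # indices of top-k docs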
What is the Optimal Representation for a Vector in an Inverted Index?
What properties would such a representation have?
• For Performance
• Sparse representation necessary to leverage inverted index
• For Relevancy
• Distributed representation
• Each document should be a collection of tokens
• Tokens represent some semantic feature of the space
• Similarity is preserved
• Similar vectors must also be similar under this new representation
• Zipfian distribution of tokens
• “We need a Zipfian Distribution” – John Berryman (co-author of ‘Relevant Search’)
Tokenizing Embedding Spaces
Zipf’s Law
• The frequency of terms in a corpus follows a power law distribution
• A small number of tokens are very common – these filter out irrelevant docs
• A large number of tokens are very rare – these discriminate between similar matches
[Chart: distribution of last names – by Thekohser, CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0), from Wikimedia Commons]
Approximate Nearest Neighbor Search
• Faster than full k-NN, with some loss in accuracy
• Approaches can be either:
• Data Dependent
• Learns and adjusts from the data
• Makes indexing new documents hard
• Data Independent
• Some Approaches:
• KD Tree
• LSH
• Heuristic Methods
• K-Means Tree
• Randomized KD Forest
• Paper: https://arxiv.org/abs/1603.09596
• HNSW (Hierarchical Navigable Small World Graphs) – top performer on http://ann-benchmarks.com/
• Paper: https://arxiv.org/pdf/1603.09320.pdf
• Vector Thresholding
• Choice of similarity metric is important in choosing an algorithm
KD Trees
• Construction
• Constructs a binary search tree by recursively partitioning the search space along the vector dimensions
• Partitions are chosen orthogonal to each dimension
• Usually split at the median
• Querying
• Described here - https://en.wikipedia.org/wiki/K-d_tree#Complexity
• Limitations
• How to implement efficiently in an inverted index?
• Lucene 6.0 dimensional points
• See also - https://www.elastic.co/blog/lucene-points-6.0
• Not exposed in Solr and Elasticsearch AFAIK
• Tree needs rebalancing on each insertion
• Curse of dimensionality
• Need N >> 2^D
• For N points and D dimensions
• Complexity essentially linear for real-world vectors (D >= 50)
• Approximate KNN Search
• Possible with KD tree – limit the number of searched nodes
• Typically out-performed by other ANN approaches
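Outside the inverted index, an exact KD-tree lookup is a few lines with SciPy; a sketch (most useful at low D, per the curse-of-dimensionality caveat above):

import numpy as np
from scipy.spatial import cKDTree

data = np.random.randn(100000, 8)  # KD trees work best for small D
tree = cKDTree(data)               # build the space-partitioning tree
dists, idxs = tree.query(np.zeros(8), k=10)  # 10 nearest neighbors (Euclidean)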
Locality Sensitive Hashing
• LSH hashes items to discrete buckets
• More buckets – slower but more accurate
• Locality Preserving
• Maximizes the probability that similar items occupy the same buckets
• Random Projection LSH (SimHash)
• LSH variant for cosine similarity
• Generate a random d-dimensional unit vector r, and for each vector v:
• hash(v) = sign(v · r)
• Produces a binary encoding, one bit for each hash function (random vector)
• Probability that 2 vectors’ hashes match is proportional to their cosine similarity
• Output of hash function can be indexed and searched using Hamming Distance
• Intuition - Van Durme and Lall -
http://www.cs.jhu.edu/~vandurme/papers/VanDurmeLallACL10-slides.pdf
• Data independent, although data dependent variations exist
• However, for real data, it is typically out-performed by heuristic methods
like k-means trees and randomized KD-trees
• https://lear.inrialpes.fr/pubs/2011/JDS11/jegou_searching_with_quantization.pdf
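A minimal random-projection (SimHash) sketch in numpy – one random hyperplane per bit, hash(v) = sign(v · r):

import numpy as np

rng = np.random.default_rng(0)

def make_hasher(d, n_bits=14):
    planes = rng.standard_normal((n_bits, d))  # one random vector r per bit
    def hash_vector(v):
        # sign(v . r) for each plane, packed into a bit string
        return "".join("1" if b else "0" for b in (planes @ v >= 0))
    return hash_vector

hasher = make_hasher(d=8)
# Similar vectors agree on most bits, so Hamming distance approximates angle
print(hasher(np.array([0.08, -0.16, -0.12, 0.27, 0.63, -0.01, 0.16, -0.48])))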
Encoding the LSH Hash into the Index
[+0.08, -0.16, -0.12, +0.27, +0.63, -0.01, +0.16, -0.48]
[1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1]
• Hash the vector into bits
• Either store the hash fingerprint as a single token:
["10110110100101"]
• Or store each bit as a token using its position and value:
["00_1","01_0","02_1","03_1","04_0","05_1","06_1","07_0","08_1","09_0","10_0","11_1","12_0","13_1"]
• Use the mm parameter to speed up search
• Or store shingles of the binary tokens
• Note – this representation is not sparse!
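A tiny sketch of the second encoding option – one token per bit position:

def positional_tokens(bits):
    # Encode each bit of an LSH fingerprint as a "position_value" token
    return ["%02d_%s" % (i, b) for i, b in enumerate(bits)]

print(positional_tokens("10110110100101"))
# ['00_1', '01_0', '02_1', '03_1', '04_0', ...]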
Hamming Similarity Class
• Custom similarity class
• Computes the number of matching tokens
K-Means Tree
• Hierarchical Clustering Algorithm
• Recursively partitions vector space using k-Means clustering
• Fast - k-means runs in linear time using Lloyd’s heuristic
• Most other clustering algorithms run in quadratic time or worse
• Tree Construction
• For some branching factor b create b clusters
• Create b nodes, store centroid for each node
• For each new cluster, cluster its members into b smaller clusters
• These form child nodes of their parent clusters, forming a tree structure
• Continue until < b members per cluster
• Paper
• "Scalable Nearest Neighbor Algorithms for High Dimensional Data" - Marius
Muja, 2014 – implemented in the FLANN library
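A compact sketch of the construction above using scikit-learn's KMeans (the node ids and dict structure are my own illustration, not the FLANN implementation):

import itertools
import numpy as np
from sklearn.cluster import KMeans

_ids = itertools.count()

def build_tree(vectors, b=8):
    # Each node stores an id (its index token) and the centroid of its members
    node = {"id": next(_ids), "centroid": vectors.mean(axis=0), "children": []}
    if len(vectors) < b:  # stop once a cluster has fewer than b members
        node["members"] = vectors
        return node
    labels = KMeans(n_clusters=b, n_init=10).fit_predict(vectors)
    for c in range(b):
        node["children"].append(build_tree(vectors[labels == c], b))
    return node

tree = build_tree(np.random.randn(5000, 50))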
K-Means Tree
[Diagram: a depth-3 k-means tree – root node, first layer, second layer (leaf nodes), with documents attached to the leaves]
Lucene Implementation Details
• Pre-train a k-means tree on a representative subset of the index
• Indexing:
• Convert all nodes from tree into unique tokens
• For each vector, find the closest matching leaf node
• Index vector with tokens for that leaf node, and all parent nodes
• Querying
• Find top n matching nodes from tree
• Convert nodes into a query, boosted by similarity to query vector
• 'q': 'clusters:("121"^0.9 "909"^0.88 "523"^0.91)'
• Create a re-rank query to brute force re-rank the top matching documents
• 'rq': '{!rerank reRankQuery=$rqq reRankDocs=1000 reRankWeight=99}'
• 'rqq': '{!payloadEdismax v=$vq}'
• 'vq': 'vector:("0"^-0.0136 "1"^0.05387 "2"^0.070476 "3"^0.14529 …)'
• Uses a special payload query parser (payload_score is insufficient)
• See https://github.com/DiceTechJobs/VectorsInSearch
• *Better approach – use doc values field or Lucene dimensional points
• Trade speed for accuracy depending on depth of tree search, and how many vectors are re-ranked
• Tree nodes follow a Zipfian distribution
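Putting the querying steps together, a hedged sketch of how the request above might be assembled in Python (field names and the payloadEdismax parser follow the slide; the tree.top_nodes helper and the core URL are hypothetical):

import requests

SOLR = "http://localhost:8983/solr/jobs/select"  # illustrative core URL

def vector_search(query_vector, tree, n_nodes=3):
    # 1. Top matching tree nodes, boosted by similarity to the query vector
    clauses = " ".join('"%s"^%.2f' % (node_id, sim)
                       for node_id, sim in tree.top_nodes(query_vector, n_nodes))
    # 2. Raw vector as position^weight terms for the payload re-rank query
    vq = "vector:(%s)" % " ".join('"%d"^%f' % (i, w)
                                  for i, w in enumerate(query_vector))
    params = {
        "q": "clusters:(%s)" % clauses,
        "rq": "{!rerank reRankQuery=$rqq reRankDocs=1000 reRankWeight=99}",
        "rqq": "{!payloadEdismax v=$vq}",
        "vq": vq,
    }
    return requests.get(SOLR, params=params).json()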
Lucene Implementation Details
• Cluster Field – stores cluster tokens
• Turn off all norms, tf and idf weighting, custom hamming similarity class
• Vector Field – stores vectors for re-ranking
• Stores components plus payloads, custom similarity class using payloads
• Similarity classes: https://github.com/DiceTechJobs/SolrPlugins
Lucene Implementation Details
[Screenshot: schema excerpts – the vector field analysis chain and the cluster fields]
Other Heuristic Methods
• Randomized KD Forest
• Constructs a number of KD trees choosing axis to split on randomly
• Searches all trees in parallel to a fixed number of leaf nodes
• KD Trees are very deep
• How to implement efficiently in an inverted index?
• Hierarchical Navigable Small World Graphs
• Hierarchical graph based model
• Paper - https://arxiv.org/pdf/1603.09320.pdf
• Consistently out-performs other ANN methods on the ann-benchmarks page
• See - http://ann-benchmarks.com/
Distribution of Vector Components
• Distribution of components from our vectors is Gaussian
• Mean is 0
• This means that most vector components are very small
• These components will have minimal impact on the cosine score
[Histogram of components taken from 350k vectors, mean = 0.0]
Vector Thresholding with Tokenization
[+0.08, -0.16, -0.12, +0.27, +0.63, -0.01, +0.16, -0.48]
[ 0, 0, 0, 0, +0.63, 0, 0, -0.48]
• Drop all but the largest components
• Round weights to lower precision
• Encode each position and weight as a single token:
["04i+0.6", "07i-0.5"]
• Paper: “Semantic Vector Encoding and Similarity Search Using Fulltext Search Engines”
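A sketch of the thresholding and tokenization steps above (my reading of the encoding; the kept fraction and rounding precision are tunable parameters):

import numpy as np

def threshold_tokens(vec, top_frac=0.25, precision=1):
    # Keep only the largest-magnitude components, then encode each survivor's
    # position and rounded weight as a single token
    v = np.asarray(vec)
    k = max(1, int(len(v) * top_frac))
    keep = sorted(np.argsort(-np.abs(v))[:k])
    return ["%02di%+.*f" % (i, precision, v[i]) for i in keep]

print(threshold_tokens([0.08, -0.16, -0.12, 0.27, 0.63, -0.01, 0.16, -0.48]))
# ['04i+0.6', '07i-0.5']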
Vector Thresholding with Payloads
[+0.08, -0.16, -0.12, +0.27, +0.63, -0.01, +0.16, -0.48]
[ 0, 0, 0, 0, +0.63, 0, 0, -0.48]
• Drop all but the largest components
• I modified the previous idea, using payload score queries
Q=vector:("3"^-0.0136 "14"^0.05387 "56"^-0.070476
"71"^0.14529 …)&defType=payloadEdismax
• Indexing: Store remaining (non zero) tokens in index with payloads
• Querying: Uses custom payload query parser + similarity class
• See the GitHub repo, and the Solr config in the k-means tree section
Performance Comparison - Initial Results
• Hardware - MacBook Pro, 2.6GHz i7 CPU, 16GB RAM, SSD
• Search Engine:
• Solr 7.5, single shard
• Index: 700k documents
• 1000 sample vector queries, requests were single threaded
• Metric – precision @10 compared to brute force
• Updated results – check https://github.com/DiceTechJobs/VectorsInSearch
Performance Comparison - Initial Results
• Each algorithm was run over a range of parameter values, to show the recall vs. speed trade-off
Performance Comparison - Initial Results
Algorithm                                                     Precision@10   Queries Per Sec (Mean Qry Time)
LSH (Hamming Similarity)                                      0.69           1.3 qps (757 ms)
K-Means Tree (trained on index)                               0.88           9.2 qps (170 ms)
K-Means Tree (trained on sample)                              0.85           9.5 qps (105 ms)
Vector Thresholding with Tokenization (top 40% of components) 0.85           3.5 qps (312 ms)
Vector Thresholding with Payloads (top 40% of components)     0.94           1.8 qps (547 ms)
The Ultimate Solution - Sparse Coding?
• Also called ‘Dictionary Learning’
• Learns a sparse ‘overcomplete’ representation of a vector
• Example Algorithms:
• Sparse Auto-Encoder
• K-SVD
• Encoding needs to preserve the Metric Space
• Similar items need to remain similar after encoding
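A hedged sketch of one dictionary-learning option via scikit-learn (sizes are illustrative; K-SVD and sparse auto-encoders are the alternatives named above):

import numpy as np
from sklearn.decomposition import DictionaryLearning

dense = np.random.randn(1000, 100)         # stand-in for document embeddings
dl = DictionaryLearning(n_components=400,  # overcomplete: 400 atoms > 100 dims
                        transform_algorithm="lasso_lars",
                        transform_alpha=0.1)
sparse_codes = dl.fit_transform(dense)     # mostly zeros – indexable as tokens
print((sparse_codes != 0).mean())          # fraction of non-zero components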
Other Relevant Approaches
• Word2bits - learns binary quantized word vectors
• https://github.com/agnusmaximus/Word2Bits
Thank you!
Github Repository:
https://github.com/DiceTechJobs/VectorsInSearch
Simon Hughes
Chief Data Scientist, Dice.com
@hughes_meister
#Activate18 #ActivateSearch
Mais conteúdo relacionado

Mais procurados

Lucene/Solr Revolution 2015: Where Search Meets Machine Learning
Lucene/Solr Revolution 2015: Where Search Meets Machine LearningLucene/Solr Revolution 2015: Where Search Meets Machine Learning
Lucene/Solr Revolution 2015: Where Search Meets Machine LearningJoaquin Delgado PhD.
 
Implementing Conceptual Search in Solr using LSA and Word2Vec: Presented by S...
Implementing Conceptual Search in Solr using LSA and Word2Vec: Presented by S...Implementing Conceptual Search in Solr using LSA and Word2Vec: Presented by S...
Implementing Conceptual Search in Solr using LSA and Word2Vec: Presented by S...Lucidworks
 
Webinar: Simpler Semantic Search with Solr
Webinar: Simpler Semantic Search with SolrWebinar: Simpler Semantic Search with Solr
Webinar: Simpler Semantic Search with SolrLucidworks
 
Developing A Big Data Search Engine - Where we have gone. Where we are going:...
Developing A Big Data Search Engine - Where we have gone. Where we are going:...Developing A Big Data Search Engine - Where we have gone. Where we are going:...
Developing A Big Data Search Engine - Where we have gone. Where we are going:...Lucidworks
 
Leveraging Lucene/Solr as a Knowledge Graph and Intent Engine: Presented by T...
Leveraging Lucene/Solr as a Knowledge Graph and Intent Engine: Presented by T...Leveraging Lucene/Solr as a Knowledge Graph and Intent Engine: Presented by T...
Leveraging Lucene/Solr as a Knowledge Graph and Intent Engine: Presented by T...Lucidworks
 
Searching and Querying Knowledge Graphs with Solr/SIREn - A Reference Archite...
Searching and Querying Knowledge Graphs with Solr/SIREn - A Reference Archite...Searching and Querying Knowledge Graphs with Solr/SIREn - A Reference Archite...
Searching and Querying Knowledge Graphs with Solr/SIREn - A Reference Archite...Lucidworks
 
Evolving the Optimal Relevancy Ranking Model at Dice.com
Evolving the Optimal Relevancy Ranking Model at Dice.comEvolving the Optimal Relevancy Ranking Model at Dice.com
Evolving the Optimal Relevancy Ranking Model at Dice.comSimon Hughes
 
Rank by time or by relevance - Revisiting Email Search
Rank by time or by relevance - Revisiting Email SearchRank by time or by relevance - Revisiting Email Search
Rank by time or by relevance - Revisiting Email SearchDavid Carmel
 
Crowdsourced query augmentation through the semantic discovery of domain spec...
Crowdsourced query augmentation through the semantic discovery of domain spec...Crowdsourced query augmentation through the semantic discovery of domain spec...
Crowdsourced query augmentation through the semantic discovery of domain spec...Trey Grainger
 
Extending Solr: Building a Cloud-like Knowledge Discovery Platform
Extending Solr: Building a Cloud-like Knowledge Discovery PlatformExtending Solr: Building a Cloud-like Knowledge Discovery Platform
Extending Solr: Building a Cloud-like Knowledge Discovery PlatformTrey Grainger
 
Reflected Intelligence - Lucene/Solr as a self-learning data system: Presente...
Reflected Intelligence - Lucene/Solr as a self-learning data system: Presente...Reflected Intelligence - Lucene/Solr as a self-learning data system: Presente...
Reflected Intelligence - Lucene/Solr as a self-learning data system: Presente...Lucidworks
 
Enhancing relevancy through personalization & semantic search
Enhancing relevancy through personalization & semantic searchEnhancing relevancy through personalization & semantic search
Enhancing relevancy through personalization & semantic searchTrey Grainger
 
Semantic & Multilingual Strategies in Lucene/Solr
Semantic & Multilingual Strategies in Lucene/SolrSemantic & Multilingual Strategies in Lucene/Solr
Semantic & Multilingual Strategies in Lucene/SolrTrey Grainger
 
Natural language processing and search
Natural language processing and searchNatural language processing and search
Natural language processing and searchNathan McMinn
 
Building a Real-time Solr-powered Recommendation Engine
Building a Real-time Solr-powered Recommendation EngineBuilding a Real-time Solr-powered Recommendation Engine
Building a Real-time Solr-powered Recommendation Enginelucenerevolution
 
Natural Language Processing with Graph Databases and Neo4j
Natural Language Processing with Graph Databases and Neo4jNatural Language Processing with Graph Databases and Neo4j
Natural Language Processing with Graph Databases and Neo4jWilliam Lyon
 
Building a real time, solr-powered recommendation engine
Building a real time, solr-powered recommendation engineBuilding a real time, solr-powered recommendation engine
Building a real time, solr-powered recommendation engineTrey Grainger
 
[AAAI 2019 tutorial] End-to-end goal-oriented question answering systems
[AAAI 2019 tutorial] End-to-end goal-oriented question answering systems[AAAI 2019 tutorial] End-to-end goal-oriented question answering systems
[AAAI 2019 tutorial] End-to-end goal-oriented question answering systemsQi He
 
Presentation of Domain Specific Question Answering System Using N-gram Approach.
Presentation of Domain Specific Question Answering System Using N-gram Approach.Presentation of Domain Specific Question Answering System Using N-gram Approach.
Presentation of Domain Specific Question Answering System Using N-gram Approach.Tasnim Ara Islam
 
Reflected intelligence evolving self-learning data systems
Reflected intelligence  evolving self-learning data systemsReflected intelligence  evolving self-learning data systems
Reflected intelligence evolving self-learning data systemsTrey Grainger
 

Mais procurados (20)

Lucene/Solr Revolution 2015: Where Search Meets Machine Learning
Lucene/Solr Revolution 2015: Where Search Meets Machine LearningLucene/Solr Revolution 2015: Where Search Meets Machine Learning
Lucene/Solr Revolution 2015: Where Search Meets Machine Learning
 
Implementing Conceptual Search in Solr using LSA and Word2Vec: Presented by S...
Implementing Conceptual Search in Solr using LSA and Word2Vec: Presented by S...Implementing Conceptual Search in Solr using LSA and Word2Vec: Presented by S...
Implementing Conceptual Search in Solr using LSA and Word2Vec: Presented by S...
 
Webinar: Simpler Semantic Search with Solr
Webinar: Simpler Semantic Search with SolrWebinar: Simpler Semantic Search with Solr
Webinar: Simpler Semantic Search with Solr
 
Developing A Big Data Search Engine - Where we have gone. Where we are going:...
Developing A Big Data Search Engine - Where we have gone. Where we are going:...Developing A Big Data Search Engine - Where we have gone. Where we are going:...
Developing A Big Data Search Engine - Where we have gone. Where we are going:...
 
Leveraging Lucene/Solr as a Knowledge Graph and Intent Engine: Presented by T...
Leveraging Lucene/Solr as a Knowledge Graph and Intent Engine: Presented by T...Leveraging Lucene/Solr as a Knowledge Graph and Intent Engine: Presented by T...
Leveraging Lucene/Solr as a Knowledge Graph and Intent Engine: Presented by T...
 
Searching and Querying Knowledge Graphs with Solr/SIREn - A Reference Archite...
Searching and Querying Knowledge Graphs with Solr/SIREn - A Reference Archite...Searching and Querying Knowledge Graphs with Solr/SIREn - A Reference Archite...
Searching and Querying Knowledge Graphs with Solr/SIREn - A Reference Archite...
 
Evolving the Optimal Relevancy Ranking Model at Dice.com
Evolving the Optimal Relevancy Ranking Model at Dice.comEvolving the Optimal Relevancy Ranking Model at Dice.com
Evolving the Optimal Relevancy Ranking Model at Dice.com
 
Rank by time or by relevance - Revisiting Email Search
Rank by time or by relevance - Revisiting Email SearchRank by time or by relevance - Revisiting Email Search
Rank by time or by relevance - Revisiting Email Search
 
Crowdsourced query augmentation through the semantic discovery of domain spec...
Crowdsourced query augmentation through the semantic discovery of domain spec...Crowdsourced query augmentation through the semantic discovery of domain spec...
Crowdsourced query augmentation through the semantic discovery of domain spec...
 
Extending Solr: Building a Cloud-like Knowledge Discovery Platform
Extending Solr: Building a Cloud-like Knowledge Discovery PlatformExtending Solr: Building a Cloud-like Knowledge Discovery Platform
Extending Solr: Building a Cloud-like Knowledge Discovery Platform
 
Reflected Intelligence - Lucene/Solr as a self-learning data system: Presente...
Reflected Intelligence - Lucene/Solr as a self-learning data system: Presente...Reflected Intelligence - Lucene/Solr as a self-learning data system: Presente...
Reflected Intelligence - Lucene/Solr as a self-learning data system: Presente...
 
Enhancing relevancy through personalization & semantic search
Enhancing relevancy through personalization & semantic searchEnhancing relevancy through personalization & semantic search
Enhancing relevancy through personalization & semantic search
 
Semantic & Multilingual Strategies in Lucene/Solr
Semantic & Multilingual Strategies in Lucene/SolrSemantic & Multilingual Strategies in Lucene/Solr
Semantic & Multilingual Strategies in Lucene/Solr
 
Natural language processing and search
Natural language processing and searchNatural language processing and search
Natural language processing and search
 
Building a Real-time Solr-powered Recommendation Engine
Building a Real-time Solr-powered Recommendation EngineBuilding a Real-time Solr-powered Recommendation Engine
Building a Real-time Solr-powered Recommendation Engine
 
Natural Language Processing with Graph Databases and Neo4j
Natural Language Processing with Graph Databases and Neo4jNatural Language Processing with Graph Databases and Neo4j
Natural Language Processing with Graph Databases and Neo4j
 
Building a real time, solr-powered recommendation engine
Building a real time, solr-powered recommendation engineBuilding a real time, solr-powered recommendation engine
Building a real time, solr-powered recommendation engine
 
[AAAI 2019 tutorial] End-to-end goal-oriented question answering systems
[AAAI 2019 tutorial] End-to-end goal-oriented question answering systems[AAAI 2019 tutorial] End-to-end goal-oriented question answering systems
[AAAI 2019 tutorial] End-to-end goal-oriented question answering systems
 
Presentation of Domain Specific Question Answering System Using N-gram Approach.
Presentation of Domain Specific Question Answering System Using N-gram Approach.Presentation of Domain Specific Question Answering System Using N-gram Approach.
Presentation of Domain Specific Question Answering System Using N-gram Approach.
 
Reflected intelligence evolving self-learning data systems
Reflected intelligence  evolving self-learning data systemsReflected intelligence  evolving self-learning data systems
Reflected intelligence evolving self-learning data systems
 

Semelhante a Vectors in Search – Towards More Semantic Matching - Simon Hughes, Dice.com

Haystack 2019 - Search with Vectors - Simon Hughes
Haystack 2019 - Search with Vectors - Simon HughesHaystack 2019 - Search with Vectors - Simon Hughes
Haystack 2019 - Search with Vectors - Simon HughesOpenSource Connections
 
Improving Search in Workday Products using Natural Language Processing
Improving Search in Workday Products using Natural Language ProcessingImproving Search in Workday Products using Natural Language Processing
Improving Search in Workday Products using Natural Language ProcessingDataWorks Summit
 
RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...
RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...
RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...Joaquin Delgado PhD.
 
RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning...
 RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning... RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning...
RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning...S. Diana Hu
 
An introduction to Metadata Application Profiles
An introduction to Metadata Application ProfilesAn introduction to Metadata Application Profiles
An introduction to Metadata Application Profileskcoylenet
 
First Steps in Semantic Data Modelling and Search & Analytics in the Cloud
First Steps in Semantic Data Modelling and Search & Analytics in the CloudFirst Steps in Semantic Data Modelling and Search & Analytics in the Cloud
First Steps in Semantic Data Modelling and Search & Analytics in the CloudOntotext
 
Enhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.com
Enhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.comEnhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.com
Enhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.comSimon Hughes
 
State of Search 2017 - Semantics and Science - Upasna Gautam
State of Search 2017 - Semantics and Science - Upasna GautamState of Search 2017 - Semantics and Science - Upasna Gautam
State of Search 2017 - Semantics and Science - Upasna GautamUpasna Gautam
 
Slides: Concurrent Inference of Topic Models and Distributed Vector Represent...
Slides: Concurrent Inference of Topic Models and Distributed Vector Represent...Slides: Concurrent Inference of Topic Models and Distributed Vector Represent...
Slides: Concurrent Inference of Topic Models and Distributed Vector Represent...Parang Saraf
 
Why do they call it Linked Data when they want to say...?
Why do they call it Linked Data when they want to say...?Why do they call it Linked Data when they want to say...?
Why do they call it Linked Data when they want to say...?Oscar Corcho
 
Red Hat Summit Connect 2023 - Redis Enterprise, the engine of Generative AI
Red Hat Summit Connect 2023 - Redis Enterprise, the engine of Generative AIRed Hat Summit Connect 2023 - Redis Enterprise, the engine of Generative AI
Red Hat Summit Connect 2023 - Redis Enterprise, the engine of Generative AILuigi Fugaro
 
Neo4j Training Introduction
Neo4j Training IntroductionNeo4j Training Introduction
Neo4j Training IntroductionMax De Marzi
 
Knowledge engineering and the Web
Knowledge engineering and the WebKnowledge engineering and the Web
Knowledge engineering and the WebGuus Schreiber
 
Tiers of Abstraction and Audience in Cultural Heritage Data Modeling
Tiers of Abstraction and Audience in Cultural Heritage Data ModelingTiers of Abstraction and Audience in Cultural Heritage Data Modeling
Tiers of Abstraction and Audience in Cultural Heritage Data ModelingRobert Sanderson
 
Intro to the semantic web (for libraries)
Intro to the semantic web (for libraries) Intro to the semantic web (for libraries)
Intro to the semantic web (for libraries) robin fay
 
Interface for Finding Close Matches from Translation Memory
Interface for Finding Close Matches from Translation MemoryInterface for Finding Close Matches from Translation Memory
Interface for Finding Close Matches from Translation MemoryPriyatham Bollimpalli
 

Semelhante a Vectors in Search – Towards More Semantic Matching - Simon Hughes, Dice.com (20)

Haystack 2019 - Search with Vectors - Simon Hughes
Haystack 2019 - Search with Vectors - Simon HughesHaystack 2019 - Search with Vectors - Simon Hughes
Haystack 2019 - Search with Vectors - Simon Hughes
 
Improving Search in Workday Products using Natural Language Processing
Improving Search in Workday Products using Natural Language ProcessingImproving Search in Workday Products using Natural Language Processing
Improving Search in Workday Products using Natural Language Processing
 
RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...
RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...
RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...
 
RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning...
 RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning... RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning...
RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning...
 
An introduction to Metadata Application Profiles
An introduction to Metadata Application ProfilesAn introduction to Metadata Application Profiles
An introduction to Metadata Application Profiles
 
Metadata
MetadataMetadata
Metadata
 
First Steps in Semantic Data Modelling and Search & Analytics in the Cloud
First Steps in Semantic Data Modelling and Search & Analytics in the CloudFirst Steps in Semantic Data Modelling and Search & Analytics in the Cloud
First Steps in Semantic Data Modelling and Search & Analytics in the Cloud
 
Enhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.com
Enhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.comEnhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.com
Enhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.com
 
NLP & DBpedia
 NLP & DBpedia NLP & DBpedia
NLP & DBpedia
 
State of Search 2017 - Semantics and Science - Upasna Gautam
State of Search 2017 - Semantics and Science - Upasna GautamState of Search 2017 - Semantics and Science - Upasna Gautam
State of Search 2017 - Semantics and Science - Upasna Gautam
 
Semantic web
Semantic webSemantic web
Semantic web
 
Slides: Concurrent Inference of Topic Models and Distributed Vector Represent...
Slides: Concurrent Inference of Topic Models and Distributed Vector Represent...Slides: Concurrent Inference of Topic Models and Distributed Vector Represent...
Slides: Concurrent Inference of Topic Models and Distributed Vector Represent...
 
Why do they call it Linked Data when they want to say...?
Why do they call it Linked Data when they want to say...?Why do they call it Linked Data when they want to say...?
Why do they call it Linked Data when they want to say...?
 
Word 2 vector
Word 2 vectorWord 2 vector
Word 2 vector
 
Red Hat Summit Connect 2023 - Redis Enterprise, the engine of Generative AI
Red Hat Summit Connect 2023 - Redis Enterprise, the engine of Generative AIRed Hat Summit Connect 2023 - Redis Enterprise, the engine of Generative AI
Red Hat Summit Connect 2023 - Redis Enterprise, the engine of Generative AI
 
Neo4j Training Introduction
Neo4j Training IntroductionNeo4j Training Introduction
Neo4j Training Introduction
 
Knowledge engineering and the Web
Knowledge engineering and the WebKnowledge engineering and the Web
Knowledge engineering and the Web
 
Tiers of Abstraction and Audience in Cultural Heritage Data Modeling
Tiers of Abstraction and Audience in Cultural Heritage Data ModelingTiers of Abstraction and Audience in Cultural Heritage Data Modeling
Tiers of Abstraction and Audience in Cultural Heritage Data Modeling
 
Intro to the semantic web (for libraries)
Intro to the semantic web (for libraries) Intro to the semantic web (for libraries)
Intro to the semantic web (for libraries)
 
Interface for Finding Close Matches from Translation Memory
Interface for Finding Close Matches from Translation MemoryInterface for Finding Close Matches from Translation Memory
Interface for Finding Close Matches from Translation Memory
 

Mais de Lucidworks

Search is the Tip of the Spear for Your B2B eCommerce Strategy
Search is the Tip of the Spear for Your B2B eCommerce StrategySearch is the Tip of the Spear for Your B2B eCommerce Strategy
Search is the Tip of the Spear for Your B2B eCommerce StrategyLucidworks
 
Drive Agent Effectiveness in Salesforce
Drive Agent Effectiveness in SalesforceDrive Agent Effectiveness in Salesforce
Drive Agent Effectiveness in SalesforceLucidworks
 
How Crate & Barrel Connects Shoppers with Relevant Products
How Crate & Barrel Connects Shoppers with Relevant ProductsHow Crate & Barrel Connects Shoppers with Relevant Products
How Crate & Barrel Connects Shoppers with Relevant ProductsLucidworks
 
Lucidworks & IMRG Webinar – Best-In-Class Retail Product Discovery
Lucidworks & IMRG Webinar – Best-In-Class Retail Product DiscoveryLucidworks & IMRG Webinar – Best-In-Class Retail Product Discovery
Lucidworks & IMRG Webinar – Best-In-Class Retail Product DiscoveryLucidworks
 
Connected Experiences Are Personalized Experiences
Connected Experiences Are Personalized ExperiencesConnected Experiences Are Personalized Experiences
Connected Experiences Are Personalized ExperiencesLucidworks
 
Intelligent Insight Driven Policing with MC+A, Toronto Police Service and Luc...
Intelligent Insight Driven Policing with MC+A, Toronto Police Service and Luc...Intelligent Insight Driven Policing with MC+A, Toronto Police Service and Luc...
Intelligent Insight Driven Policing with MC+A, Toronto Police Service and Luc...Lucidworks
 
[Webinar] Intelligent Policing. Leveraging Data to more effectively Serve Com...
[Webinar] Intelligent Policing. Leveraging Data to more effectively Serve Com...[Webinar] Intelligent Policing. Leveraging Data to more effectively Serve Com...
[Webinar] Intelligent Policing. Leveraging Data to more effectively Serve Com...Lucidworks
 
Preparing for Peak in Ecommerce | eTail Asia 2020
Preparing for Peak in Ecommerce | eTail Asia 2020Preparing for Peak in Ecommerce | eTail Asia 2020
Preparing for Peak in Ecommerce | eTail Asia 2020Lucidworks
 
Accelerate The Path To Purchase With Product Discovery at Retail Innovation C...
Accelerate The Path To Purchase With Product Discovery at Retail Innovation C...Accelerate The Path To Purchase With Product Discovery at Retail Innovation C...
Accelerate The Path To Purchase With Product Discovery at Retail Innovation C...Lucidworks
 
AI-Powered Linguistics and Search with Fusion and Rosette
AI-Powered Linguistics and Search with Fusion and RosetteAI-Powered Linguistics and Search with Fusion and Rosette
AI-Powered Linguistics and Search with Fusion and RosetteLucidworks
 
The Service Industry After COVID-19: The Soul of Service in a Virtual Moment
The Service Industry After COVID-19: The Soul of Service in a Virtual MomentThe Service Industry After COVID-19: The Soul of Service in a Virtual Moment
The Service Industry After COVID-19: The Soul of Service in a Virtual MomentLucidworks
 
Webinar: Smart answers for employee and customer support after covid 19 - Europe
Webinar: Smart answers for employee and customer support after covid 19 - EuropeWebinar: Smart answers for employee and customer support after covid 19 - Europe
Webinar: Smart answers for employee and customer support after covid 19 - EuropeLucidworks
 
Smart Answers for Employee and Customer Support After COVID-19
Smart Answers for Employee and Customer Support After COVID-19Smart Answers for Employee and Customer Support After COVID-19
Smart Answers for Employee and Customer Support After COVID-19Lucidworks
 
Applying AI & Search in Europe - featuring 451 Research
Applying AI & Search in Europe - featuring 451 ResearchApplying AI & Search in Europe - featuring 451 Research
Applying AI & Search in Europe - featuring 451 ResearchLucidworks
 
Webinar: Accelerate Data Science with Fusion 5.1
Webinar: Accelerate Data Science with Fusion 5.1Webinar: Accelerate Data Science with Fusion 5.1
Webinar: Accelerate Data Science with Fusion 5.1Lucidworks
 
Webinar: 5 Must-Have Items You Need for Your 2020 Ecommerce Strategy
Webinar: 5 Must-Have Items You Need for Your 2020 Ecommerce StrategyWebinar: 5 Must-Have Items You Need for Your 2020 Ecommerce Strategy
Webinar: 5 Must-Have Items You Need for Your 2020 Ecommerce StrategyLucidworks
 
Where Search Meets Science and Style Meets Savings: Nordstrom Rack's Journey ...
Where Search Meets Science and Style Meets Savings: Nordstrom Rack's Journey ...Where Search Meets Science and Style Meets Savings: Nordstrom Rack's Journey ...
Where Search Meets Science and Style Meets Savings: Nordstrom Rack's Journey ...Lucidworks
 
Apply Knowledge Graphs and Search for Real-World Decision Intelligence
Apply Knowledge Graphs and Search for Real-World Decision IntelligenceApply Knowledge Graphs and Search for Real-World Decision Intelligence
Apply Knowledge Graphs and Search for Real-World Decision IntelligenceLucidworks
 
Webinar: Building a Business Case for Enterprise Search
Webinar: Building a Business Case for Enterprise SearchWebinar: Building a Business Case for Enterprise Search
Webinar: Building a Business Case for Enterprise SearchLucidworks
 
Why Insight Engines Matter in 2020 and Beyond
Why Insight Engines Matter in 2020 and BeyondWhy Insight Engines Matter in 2020 and Beyond
Why Insight Engines Matter in 2020 and BeyondLucidworks
 

Mais de Lucidworks (20)

Search is the Tip of the Spear for Your B2B eCommerce Strategy
Search is the Tip of the Spear for Your B2B eCommerce StrategySearch is the Tip of the Spear for Your B2B eCommerce Strategy
Search is the Tip of the Spear for Your B2B eCommerce Strategy
 
Drive Agent Effectiveness in Salesforce
Drive Agent Effectiveness in SalesforceDrive Agent Effectiveness in Salesforce
Drive Agent Effectiveness in Salesforce
 
How Crate & Barrel Connects Shoppers with Relevant Products
How Crate & Barrel Connects Shoppers with Relevant ProductsHow Crate & Barrel Connects Shoppers with Relevant Products
How Crate & Barrel Connects Shoppers with Relevant Products
 
Lucidworks & IMRG Webinar – Best-In-Class Retail Product Discovery
Lucidworks & IMRG Webinar – Best-In-Class Retail Product DiscoveryLucidworks & IMRG Webinar – Best-In-Class Retail Product Discovery
Lucidworks & IMRG Webinar – Best-In-Class Retail Product Discovery
 
Connected Experiences Are Personalized Experiences
Connected Experiences Are Personalized ExperiencesConnected Experiences Are Personalized Experiences
Connected Experiences Are Personalized Experiences
 
Intelligent Insight Driven Policing with MC+A, Toronto Police Service and Luc...
Intelligent Insight Driven Policing with MC+A, Toronto Police Service and Luc...Intelligent Insight Driven Policing with MC+A, Toronto Police Service and Luc...
Intelligent Insight Driven Policing with MC+A, Toronto Police Service and Luc...
 
[Webinar] Intelligent Policing. Leveraging Data to more effectively Serve Com...
[Webinar] Intelligent Policing. Leveraging Data to more effectively Serve Com...[Webinar] Intelligent Policing. Leveraging Data to more effectively Serve Com...
[Webinar] Intelligent Policing. Leveraging Data to more effectively Serve Com...
 
Preparing for Peak in Ecommerce | eTail Asia 2020
Preparing for Peak in Ecommerce | eTail Asia 2020Preparing for Peak in Ecommerce | eTail Asia 2020
Preparing for Peak in Ecommerce | eTail Asia 2020
 
Accelerate The Path To Purchase With Product Discovery at Retail Innovation C...
Accelerate The Path To Purchase With Product Discovery at Retail Innovation C...Accelerate The Path To Purchase With Product Discovery at Retail Innovation C...
Accelerate The Path To Purchase With Product Discovery at Retail Innovation C...
 
AI-Powered Linguistics and Search with Fusion and Rosette
AI-Powered Linguistics and Search with Fusion and RosetteAI-Powered Linguistics and Search with Fusion and Rosette
AI-Powered Linguistics and Search with Fusion and Rosette
 
The Service Industry After COVID-19: The Soul of Service in a Virtual Moment
The Service Industry After COVID-19: The Soul of Service in a Virtual MomentThe Service Industry After COVID-19: The Soul of Service in a Virtual Moment
The Service Industry After COVID-19: The Soul of Service in a Virtual Moment
 
Webinar: Smart answers for employee and customer support after covid 19 - Europe
Webinar: Smart answers for employee and customer support after covid 19 - EuropeWebinar: Smart answers for employee and customer support after covid 19 - Europe
Webinar: Smart answers for employee and customer support after covid 19 - Europe
 
Smart Answers for Employee and Customer Support After COVID-19
Smart Answers for Employee and Customer Support After COVID-19Smart Answers for Employee and Customer Support After COVID-19
Smart Answers for Employee and Customer Support After COVID-19
 
Applying AI & Search in Europe - featuring 451 Research
Applying AI & Search in Europe - featuring 451 ResearchApplying AI & Search in Europe - featuring 451 Research
Applying AI & Search in Europe - featuring 451 Research
 
Webinar: Accelerate Data Science with Fusion 5.1
Webinar: Accelerate Data Science with Fusion 5.1Webinar: Accelerate Data Science with Fusion 5.1
Webinar: Accelerate Data Science with Fusion 5.1
 
Webinar: 5 Must-Have Items You Need for Your 2020 Ecommerce Strategy
Webinar: 5 Must-Have Items You Need for Your 2020 Ecommerce StrategyWebinar: 5 Must-Have Items You Need for Your 2020 Ecommerce Strategy
Webinar: 5 Must-Have Items You Need for Your 2020 Ecommerce Strategy
 
Where Search Meets Science and Style Meets Savings: Nordstrom Rack's Journey ...
Where Search Meets Science and Style Meets Savings: Nordstrom Rack's Journey ...Where Search Meets Science and Style Meets Savings: Nordstrom Rack's Journey ...
Where Search Meets Science and Style Meets Savings: Nordstrom Rack's Journey ...
 
Apply Knowledge Graphs and Search for Real-World Decision Intelligence
Apply Knowledge Graphs and Search for Real-World Decision IntelligenceApply Knowledge Graphs and Search for Real-World Decision Intelligence
Apply Knowledge Graphs and Search for Real-World Decision Intelligence
 
Webinar: Building a Business Case for Enterprise Search
Webinar: Building a Business Case for Enterprise SearchWebinar: Building a Business Case for Enterprise Search
Webinar: Building a Business Case for Enterprise Search
 
Why Insight Engines Matter in 2020 and Beyond
Why Insight Engines Matter in 2020 and BeyondWhy Insight Engines Matter in 2020 and Beyond
Why Insight Engines Matter in 2020 and Beyond
 

Último

Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Principled Technologies
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MIND CTI
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherRemote DBA Services
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsRoshan Dwivedi
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?Igalia
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProduct Anonymous
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsJoaquim Jorge
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdflior mazor
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingEdi Saputra
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...Martijn de Jong
 
Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024The Digital Insurer
 

Último (20)

Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization

Vectors in Search – Towards More Semantic Matching - Simon Hughes, Dice.com

The Importance of Context
How do we learn the meaning (semantics) of words?
• Distributional Hypothesis
• Words occurring in similar contexts have similar meanings
• Harris 1954
• “a word is characterized by the company it keeps”
• Firth 1957
• Ignores word order, grammar and syntax
• Latent Relation Hypothesis
• Pairs of words occurring in similar patterns have similar semantic relations
• Turney et al, 2003
• Patterns – X cuts Y, X works with Y, etc
• Word order and grammatical relations matter
• Further reading – Distributional approaches to word meanings
Learning Meaning from Context
Bag of Words approaches – ignore word order
• Latent Models
• Context – documents
• LSA
• LDA
• Semantic Vector Space Model
• Word Embeddings
• Context – word window
• Word2vec
• GloVe
• Simple linear language models
• History – http://blog.aylien.com/a-review-of-the-recent-history-of-natural-language-processing/
• For document embeddings
• Average or idf-weighted average of word vectors (see the sketch below)
• Sentence / Document Embeddings
• Context – document + word window
• E.g. Doc2vec
• Context – surrounding sentences
• E.g. skip-thought vectors
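To make the document-embedding bullet concrete, here is a minimal sketch of the idf-weighted average, assuming `word_vectors` is a token → numpy array mapping (e.g. loaded from a trained word2vec model) and `idf` maps tokens to idf weights; both names are illustrative:

```python
import numpy as np

def doc_embedding(tokens, word_vectors, idf):
    """Document vector = idf-weighted average of its word vectors."""
    vecs, weights = [], []
    for tok in tokens:
        if tok in word_vectors:
            vecs.append(word_vectors[tok])
            weights.append(idf.get(tok, 1.0))  # fallback weight for unseen idf
    if not vecs:
        return None  # no in-vocabulary tokens
    return np.average(np.vstack(vecs), axis=0, weights=weights)
```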
Word2Vec
• Image by Aelu013 [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons
Limitations of BOW Approaches:
• Shallow representation
• Word embeddings – limited to the word level
• Latent models – document level, but don’t encode relational information
• Synonymy – learn relatedness, not true synonyms
• E.g. antonyms have similar vectors
• Polysemy – cannot encode different meanings of the same word
• Global model, not a local model
Beyond BOW – Deep Language Models
• Deep Language Model Embeddings
• Derived from the internal state of a deep LM
• Learns a deep representation of sequences of words in context
• Can adjust word vectors based on their current context
• “NLP’s ImageNet moment”
• Achieved state-of-the-art results on many NLP tasks
• Consistently outperform word embedding models
• Example models – ELMo, ULMFiT, OpenAI Transformer
• Used for encoding sentences, not whole documents
• Hard to scale
Deep Language Models
A language model factorizes the joint probability of a word sequence into a product of next-word predictions:

p(w1, w2, …, wn) = p(w1) · p(w2 | w1) · p(w3 | w1, w2) · … · p(wn | w1, …, wn-1)

[Diagram: an LSTM unrolled over the sequence, emitting each conditional p(wi | w1, …, wi-1) in turn]
Embedding Models for Search
• Word Embedding Approaches
• Cluster Word Embeddings
• “Representing Documents and Queries as Sets of Word Embedded Vectors for Information Retrieval”
• Clustered word2vec vectors using k-means
• Documents represented as clusters of word vectors
• Query – mapped to similarities with the cluster centroids
• Outperformed Jelinek-Mercer LM similarity using VSM
• Average Word Embeddings
• From Chapter 5 of Deep Learning for Search
• Author – Tommaso Teofili
• Query and document represented as the average of their word2vec vectors
• Computing a weighted average using idf worked best
• Outperformed BM25 using cosine similarity
• BM25 + word2vec – highest NDCG score
Embedding Models for Search
• Dual Embedding Space Model (DESM)
• Research from Microsoft
• Extends word2vec
• Learns a dual embedding for queries and documents
• Paper – https://arxiv.org/pdf/1602.01137.pdf
• Evaluation
• Compared BM25, LSA and DESM on Bing query log data
• Metrics – NDCG@1, NDCG@3, NDCG@5
• Results
• LSA and DESM both outperformed BM25
• DESM outperformed LSA
• DESM + BM25 outperformed all other approaches
Agenda
• Why a Vector Representation?
• Learning Vector Representations
• Vector Based Search in an Inverted Index
Vectors in Search
• Dense Embedding Vector:
• Dense
• D dimensional, D = 50–1000
• E.g. [+0.12, -0.34, -0.12, +0.27, +0.63]
• Inverted index:
• Sparse
• Pivoted by term
• V = vocabulary, |V| = 100k+
• Fast because sparse

Term | Posting List
Java | 1,5,100,102
.NET | 2,4,600,605,1000
C# | 2,88,105,800
SQL | 130,433,648,899,1200
Html | 1,2,10,30,55,202,252,30,598,…
Searching with Word Embeddings
Approaches for using word embeddings:
• Top N terms (see the query-expansion sketch below)
• Expand the query using the top n terms from the model
• Boost expansions by cosine similarity
• Can use as a boost query, a re-rank query or a straight term expansion
• Q = “java developer”^10 OR “java j2ee developer”^0.91 OR “java architect”^0.89 OR “lead java developer”^0.87 OR “j2ee developer”^0.86 OR “java engineer”^0.86
• Term Clustering
• Cluster the embeddings using a clustering algorithm, e.g. k-means
• Compute different sized clusterings, k = 100, 1000, 10000
• Map clusters to tokens and index
• Different fields for each k
• Larger k fields – bigger boost, or rely on idf scoring
• Query expands to the top clusters, boosted by similarity
• Q = “java developer”^10 OR cluster_k1000:5894^5 OR cluster_k100:23^2.5 OR cluster_k10:8^1.25
• See https://github.com/DiceTechJobs/ConceptualSearch
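A minimal sketch of the top-n expansion, assuming a gensim word2vec model trained on the corpus with multi-word phrases joined into single underscore tokens (e.g. "java_developer"); the helper name and boost values are illustrative:

```python
from gensim.models import KeyedVectors

def expand_query(query, kv: KeyedVectors, top_n=5, base_boost=10):
    """Expand a query with its top-n nearest terms, boosted by cosine similarity."""
    clauses = ['"{}"^{}'.format(query, base_boost)]
    token = query.replace(" ", "_")  # assumes phrases were trained as single tokens
    if token in kv:
        for term, sim in kv.most_similar(token, topn=top_n):
            clauses.append('"{}"^{:.2f}'.format(term.replace("_", " "), sim))
    return " OR ".join(clauses)

# expand_query("java developer", kv) ->
#   '"java developer"^10 OR "java j2ee developer"^0.91 OR "java architect"^0.89 ...'
```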
Searching Vectors – k-NN Search
• k-NN search
• Find the k closest neighbors to the query vector according to a similarity metric
• Usually cosine similarity or Euclidean distance
• Definitions
• D = number of components in the vector
• N = number of documents
• Brute Force Search (see the sketch below):
• O(ND) – linear
• What if N and/or D is very large?
• Vs. Inverted Index
• Sublinear
• Makes use of the sparsity of terms
• BTree or distributed hash table lookup for terms, iterate posting lists, re-rank matches – O(n log n)
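For reference, the exact brute-force baseline that the approximate methods later in the talk are compared against, as a minimal numpy sketch:

```python
import numpy as np

def knn_brute_force(query, doc_matrix, k=10):
    """Exact k-NN by cosine similarity: O(N*D) per query.
    doc_matrix is an (N, D) array of document vectors; query is a (D,) array."""
    doc_norms = np.linalg.norm(doc_matrix, axis=1) + 1e-9
    sims = (doc_matrix @ query) / (doc_norms * np.linalg.norm(query))
    top_k = np.argsort(-sims)[:k]  # indices of the k most similar documents
    return top_k, sims[top_k]
```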
What is the Optimal Representation for a Vector in an Inverted Index?
What properties would such a representation have?
• For Performance
• Sparse representation – necessary to leverage the inverted index
• For Relevancy
• Distributed representation
• Each document should be a collection of tokens
• Tokens represent some semantic feature of the space
• Similarity is preserved
• Similar vectors must also be similar under the new representation
• Zipfian distribution of tokens
• “We need a Zipfian Distribution” – John Berryman (co-author of ‘Relevant Search’)
Tokenizing Embedding Spaces
Zipf’s Law
• The frequency of terms in a corpus follows a power law distribution
• A small number of tokens are very common – filter out irrelevant docs
• A large number of tokens are very rare – discriminate between similar matches
• Image: distribution of last names – by Thekohser [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons
Approximate Nearest Neighbor Search
• Faster than full k-NN, with some loss in accuracy
• Approaches can be either:
• Data Dependent
• Learns and adjusts from the data
• Makes indexing new documents hard
• Data Independent
• Some Approaches:
• KD Tree
• LSH
• Heuristic Methods
• K-Means Tree
• Randomized KD Forest
• Paper: https://arxiv.org/abs/1603.09596
• HNSW (Hierarchical Navigable Small World Graphs) – top performer on http://ann-benchmarks.com/
• Paper: https://arxiv.org/pdf/1603.09320.pdf
• Vector Thresholding
• The choice of similarity metric is important in choosing an algorithm
KD Trees
• Construction
• Constructs a binary search tree by recursively partitioning the search space, one vector dimension at a time
• Partitions are chosen orthogonal to each dimension, usually at the median
• Querying
• Described here – https://en.wikipedia.org/wiki/K-d_tree#Complexity
• Limitations
• How to implement efficiently in an inverted index?
• Lucene 6.0 dimensional points
• See also – https://www.elastic.co/blog/lucene-points-6.0
• Not exposed in Solr and ElasticSearch AFAIK
• Tree needs rebalancing on each insertion
• Curse of dimensionality
• Needs N >> 2^D, for N points and D dimensions
• Complexity is essentially linear for real-world vectors (D >= 50)
• Approximate k-NN Search
• Possible with a KD tree – limit the number of searched nodes
• Typically outperformed by other ANN approaches
Locality Sensitive Hashing
• LSH hashes items to discrete buckets
• More buckets – slower but more accurate
• Locality Preserving
• Maximizes the probability that similar items occupy the same buckets
• Random Projection LSH (simHash) – sketched below
• LSH variant for cosine similarity
• Generate a random d-dimensional unit vector r, and for each vector v:
• hash(v) = sign(v · r)
• Produces a binary encoding, one bit per hash function (random vector)
• The probability that 2 vectors’ hashes match is proportional to their cosine similarity
• The output of the hash functions can be indexed and searched using Hamming distance
• Intuition – Van Durme and Lall – http://www.cs.jhu.edu/~vandurme/papers/VanDurmeLallACL10-slides.pdf
• Data independent, although data dependent variations exist
• However, for real data it is typically outperformed by heuristic methods such as k-means trees and randomized KD trees
• https://lear.inrialpes.fr/pubs/2011/JDS11/jegou_searching_with_quantization.pdf
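A minimal numpy sketch of random projection LSH: one random hyperplane per output bit, so two vectors agree on each bit with probability 1 - theta/pi (theta = the angle between them). The function names and bit count are illustrative:

```python
import numpy as np

def make_simhash(dim, n_bits, seed=0):
    """Random projection LSH: one random unit vector (hyperplane) per bit."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_bits, dim))
    planes /= np.linalg.norm(planes, axis=1, keepdims=True)

    def simhash(v):
        # hash(v) = sign(v . r) for each random vector r -> one bit per plane
        return (planes @ v >= 0).astype(np.uint8)

    return simhash

# Hamming similarity between two hashes approximates the cosine
# similarity between the original vectors.
```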
Encoding the LSH Hash into the Index
[+0.08, -0.16, -0.12, +0.27, +0.63, -0.01, +0.16, -0.48]
→ hash into bits → [1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1]
Then either:
• Store the hash fingerprint as a single token, e.g. [“10110110100101”]
• OR store each bit as a token using its position and value, e.g. [“00_1”, “01_0”, “02_1”, “03_1”, “04_0”, “05_1”, “06_1”, “07_0”, “08_1”, “09_0”, “10_0”, “11_1”, “12_0”, “13_1”] (both encodings are sketched below)
• Use the mm (minimum should match) parameter to speed up search
• Or store shingles of the binary tokens
• Note: this representation is not sparse!
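Both encodings in a short sketch (function names illustrative):

```python
def bits_to_fingerprint(bits):
    """Whole hash as one token - exact-bucket matching only."""
    return "".join(str(b) for b in bits)

def bits_to_tokens(bits):
    """One token per bit, position + value (e.g. '00_1') - lets the engine
    match individual bits and count agreements (Hamming similarity)."""
    return ["{:02d}_{}".format(i, b) for i, b in enumerate(bits)]
```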
Hamming Similarity Class
• Custom similarity class
• Computes the number of matching tokens (see the sketch below)
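The actual implementation is a Lucene/Solr Similarity plugin (see https://github.com/DiceTechJobs/SolrPlugins); conceptually, the score it replaces tf-idf with is just this:

```python
def hamming_similarity(query_tokens, doc_tokens):
    """Score = number of positional bit tokens the query and document share."""
    return len(set(query_tokens) & set(doc_tokens))
```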
K-Means Tree
• Hierarchical clustering algorithm
• Recursively partitions the vector space using k-means clustering
• Fast – k-means runs in linear time using Lloyd’s heuristic
• Most other clustering algorithms run in quadratic time or worse
• Tree Construction (see the sketch below)
• For some branching factor b, create b clusters
• Create b nodes, storing the centroid for each node
• For each new cluster, cluster its members into b smaller clusters
• These form child nodes of their parent clusters, forming a tree structure
• Continue until there are < b members per cluster
• Paper
• "Scalable Nearest Neighbor Algorithms for High Dimensional Data" – Marius Muja, 2014 – implemented in the FLANN library
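A minimal recursive construction sketch using scikit-learn's KMeans; the dict-based node structure, branching factor and depth limit are illustrative choices, not the FLANN implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_kmeans_tree(vectors, b=8, max_depth=3, depth=0):
    """Recursively partition `vectors` (an (N, D) array) into a b-way tree;
    every node stores the centroid of the vectors beneath it."""
    node = {"centroid": vectors.mean(axis=0), "children": []}
    if len(vectors) < b or depth >= max_depth:
        return node  # leaf: fewer than b members (or depth limit reached)
    labels = KMeans(n_clusters=b, n_init=4).fit_predict(vectors)
    for c in range(b):
        members = vectors[labels == c]
        if len(members) > 0:
            node["children"].append(build_kmeans_tree(members, b, max_depth, depth + 1))
    return node
```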
K-Means Tree
[Diagram: a depth-3 k-means tree – root node, first layer, second layer (leaf nodes), with documents attached to the leaves]
Lucene Implementation Details
• Pre-train a k-means tree on a representative subset of the index
• Indexing:
• Convert all nodes from the tree into unique tokens
• For each vector, find the closest matching leaf node
• Index the vector with tokens for that leaf node and all of its parent nodes
• Querying (see the sketch below):
• Find the top n matching nodes from the tree
• Convert the nodes into a query, boosted by similarity to the query vector
• 'q': 'clusters:("121"^0.9 "909"^0.88 "523"^0.91)'
• Create a re-rank query to brute-force re-rank the top matching documents
• 'rq': '{!rerank reRankQuery=$rqq reRankDocs=1000 reRankWeight=99}'
• 'rqq': '{!payloadEdismax v=$vq}'
• 'vq': vector:("0"^-0.0136 "1"^0.05387 "2"^0.070476 "3"^0.14529 …)
• Uses a special payload query parser (payload_score is insufficient)
• See https://github.com/DiceTechJobs/VectorsInSearch
• *A better approach – use a doc values field or Lucene dimensional points
• Trade speed for accuracy depending on the depth of the tree search and how many vectors are re-ranked
• Tree nodes follow a Zipfian distribution
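A sketch of assembling those Solr parameters in Python, mirroring the example above; payloadEdismax is the custom query parser from the repos referenced in the talk, and the helper name is illustrative:

```python
def solr_vector_params(matching_nodes, query_vector, rerank_docs=1000):
    """matching_nodes: (node_token, similarity) pairs from searching the tree;
    query_vector: the dense query embedding, re-scored against payloads on re-rank."""
    clusters = " ".join('"%s"^%.2f' % (tok, sim) for tok, sim in matching_nodes)
    vq = "vector:(" + " ".join('"%d"^%.5f' % (i, w)
                               for i, w in enumerate(query_vector)) + ")"
    return {
        "q": "clusters:(%s)" % clusters,
        "rq": "{!rerank reRankQuery=$rqq reRankDocs=%d reRankWeight=99}" % rerank_docs,
        "rqq": "{!payloadEdismax v=$vq}",
        "vq": vq,
    }
```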
Lucene Implementation Details
• Cluster field – stores the cluster tokens
• Turn off all norms, tf and idf weighting; custom Hamming similarity class
• Vector field – stores the vectors for re-ranking
• Stores components plus payloads; custom similarity class using payloads
• Similarity classes: https://github.com/DiceTechJobs/SolrPlugins
Lucene Implementation Details
[Screenshots: the Solr schema for the vector field analysis chain and the cluster fields]
Other Heuristic Methods
• Randomized KD Forest
• Constructs a number of KD trees, choosing the axis to split on randomly
• Searches all trees in parallel, up to a fixed number of leaf nodes
• KD trees are very deep
• How to implement efficiently in an inverted index?
• Hierarchical Navigable Small World Graphs (HNSW)
• Hierarchical graph-based model
• Paper – https://arxiv.org/pdf/1603.09320.pdf
• Consistently outperforms other ANN methods on the ANN benchmarks page
• See – http://ann-benchmarks.com/
Distribution of Vector Components
• The distribution of components from our vectors is Gaussian, with mean 0
• This means that most vector components are very small
• These components will have minimal impact on the cosine score
[Figure: histogram of components taken from 350k vectors; mean = 0.0]
Vector Thresholding with Tokenization
[+0.08, -0.16, -0.12, +0.27, +0.63, -0.01, +0.16, -0.48]
→ [0, 0, 0, 0, +0.63, 0, 0, -0.48]
• Drop all but the largest components (see the sketch below)
• Round each weight to lower precision
• Encode position and weight as a single token, e.g. [“04i+0.6”, “07i-0.5”]
• Paper: “Semantic Vector Encoding and Similarity Search Using Fulltext Search Engines”
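A minimal numpy sketch of this tokenization; keep_frac and the rounding precision are illustrative knobs:

```python
import numpy as np

def threshold_tokens(vector, keep_frac=0.4, precision=1):
    """Keep only the largest-magnitude components, then encode each surviving
    component's position and rounded weight as one token, e.g. '04i+0.6'."""
    v = np.asarray(vector, dtype=float)
    k = max(1, int(len(v) * keep_frac))
    top = np.argsort(-np.abs(v))[:k]  # indices of the k largest |components|
    return ["{:02d}i{:+.{p}f}".format(i, v[i], p=precision) for i in sorted(top)]

# threshold_tokens([0.08, -0.16, -0.12, 0.27, 0.63, -0.01, 0.16, -0.48], 0.25)
#   -> ['04i+0.6', '07i-0.5']
```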
Vector Thresholding with Payloads
[+0.08, -0.16, -0.12, +0.27, +0.63, -0.01, +0.16, -0.48]
→ [0, 0, 0, 0, +0.63, 0, 0, -0.48]
• Drop all but the largest components
• I modified the previous idea, using payload score queries (see the sketch below):
• Q=vector:("3"^-0.0136 "14"^0.05387 "56"^-0.070476 "71"^0.14529 …)&defType=payloadEdismax
• Indexing: store the remaining (non-zero) components in the index as tokens with payloads
• Querying: uses the custom payload query parser + similarity class
• See the Github repo, and the solr config in the k-means tree section
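The query side of that idea as a sketch: threshold the query vector the same way, then emit the surviving components as boosted payload terms (numpy assumed; the helper name is illustrative):

```python
import numpy as np

def payload_query(vector, keep_frac=0.4):
    """Boosted payload query over the largest-magnitude components only:
    the component index is the term, its (signed) weight is the boost."""
    v = np.asarray(vector, dtype=float)
    k = max(1, int(len(v) * keep_frac))
    top = sorted(np.argsort(-np.abs(v))[:k])
    clauses = " ".join('"%d"^%.5f' % (i, v[i]) for i in top)
    return "vector:(%s)" % clauses
```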
Performance Comparison – Initial Results
• Hardware – MacBook Pro, 2.6GHz i7 CPU, 16GB RAM, SSD
• Search Engine:
• Solr 7.5, single shard
• Index: 700k documents
• 1000 sample vector queries; requests were single-threaded
• Metric – precision@10 compared to brute force
• Updated results – check https://github.com/DiceTechJobs/VectorsInSearch
Performance Comparison – Initial Results
• Each algorithm was run over a range of different parameter values, to show the recall–speed trade-off
Performance Comparison – Initial Results

Algorithm | Precision@10 | Queries Per Sec (Mean Qry Time)
LSH (Hamming Similarity) | 0.69 | 1.3 qps (757 ms)
Kmeans Tree (trained on index) | 0.88 | 9.2 qps (170 ms)
Kmeans Tree (trained on sample) | 0.85 | 9.5 qps (105 ms)
Vector Thresholding with Tokenization (top 40% of components) | 0.85 | 3.5 qps (312 ms)
Vector Thresholding with Payloads (top 40% of components) | 0.94 | 1.8 qps (547 ms)
The Ultimate Solution – Sparse Coding?
• Also called ‘Dictionary Learning’
• Learns a sparse ‘overcomplete’ representation of a vector
• Example algorithms:
• Sparse auto-encoder
• K-SVD
• The encoding needs to preserve the metric space
• Similar items need to remain similar after encoding
Other Relevant Approaches
• Word2Bits – learns binary quantized word vectors
• https://github.com/agnusmaximus/Word2Bits
Thank you!
Github Repository: https://github.com/DiceTechJobs/VectorsInSearch
Simon Hughes
Chief Data Scientist, Dice.com
@hughes_meister
#Activate18 #ActivateSearch

Editor's Notes

  1. Metrics – recall is often used for measuring synonymy and related problems, while precision and traditional IR metrics are better at measuring the efficacy of disambiguating a user's intent.
  2. Context – bag of words. Global – learn semantic representations of terms; address synonymy (word level); learn colocations (phrases). Local – can be used to disambiguate ambiguous terms; address polysemy.
  3. By Aelu013 [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons. For an LSA illustration, and an excellent explanation, see here – http://iv.slis.indiana.edu/sw/lsa.html
  4. Word vectors don’t learn true synonyms – they don’t truly solve the synonymy problem, and they don’t handle polysemy, as the same vector is used for a word regardless of its context. Deep LMs capture the meaning of a sequence of words in context – not just individual words in isolation. Context – bag of words. Global – learn semantic representations of terms; address synonymy (word level); learn colocations (phrases). Local – can be used to disambiguate ambiguous terms; address polysemy.
  5. How do we represent dense vectors in a form that works inside an inverted index?
  6. Note – it is important to do colocation (phrase) detection before building an embedding model. Embeddings work better when phrases are passed in as single tokens.
  7. Excellent explanation of simHash – the Van Durme and Lall presentation, slide 15.