Badenes-Olmedo, Carlos
Corcho, Oscar
Ontology Engineering Group (OEG)
Universidad Politécnica de Madrid (UPM)
Probabilistic
Topic Models
K-CAP 2017
Knowledge Capture

December 4th-6th, 2017

Austin, Texas, United States
cbadenes@fi.upm.es
@carbadol
github.com/librairy
oeg-upm.net
2
get ready to play ..
$ git clone git@github.com:librairy/tutorial.git .
$ docker run --name test -v "$PWD":/src librairy/compiler
3
Outline
1. Artifacts
2. Parameters
3. Training
4. Evaluation and Interpretation
5. Inference
6. Trends
7. Domains
8. Topic-based Similarity
4
Documents
5
Words
(word cloud: dataset, datum, knowledge, ontology, query, example, page, event, concept, class, extraction, section, science, term, instance, measure)
6
Topics: dataset, event
ontology    0.011696399528999426
query       0.00840189384297452
datum       0.007648923027408832
example     0.007628095111872131
knowledge   0.006779803572327723
page        0.006705717576295264
event       0.0066423608486780505
case        0.006638825784874849
dataset     0.006361954166489025
concept     0.005970016686794867
..          ..
class       0.00028885997137773176
extraction  0.00027474383256337716
datum       0.0002738706330554531
section     0.000268698920616526
example     0.00026805520153634676
event       0.0002648849674904949
dataset     0.00026021285725890135
case        0.0002595933993586539
result      0.0002585102532654629
entity      0.00025712436538381225
..          ..
7
Documents: dataset, event
(same ranked word lists as on the previous slide)
8
Representational Model
(Figure: the Topic Model relates DOCUMENTS, WORDS and TOPICS; each document is a vector of topic proportions over [T0, T1, T2], e.g. ten documents at [0.6, 0.3, 0.1], one at [0.2, 0.2, 0.6] and one at [0.4, 0.3, 0.3]; topics are described by words such as dataset, event, term, datum, concept, knowledge)
9
Visualization
• PMLA items from JSTOR's Data for Research service
• restricted to items categorized as "full-length articles" with more than 2000 words
• 5605 articles out of the 9200 items from the years 1889–2007
• LDA, 64 topics
10
Outline
1. Artifacts
2. Parameters
3. Training
4. Evaluation and Interpretation
5. Inference
6. Trends
7. Domains
8. Topic-based Similarity
11
• Each topic is a distribution over words
• Each document is a mixture of corpus-wide topics
• Each word is drawn from one of those topics
Probabilistic Topic Models
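As a quick illustration, a minimal numpy sketch of this generative story, with a toy vocabulary and two hand-picked topics (all values here are illustrative, not from the tutorial):

import numpy as np

rng = np.random.default_rng(0)

vocab = ["ontology", "query", "dataset", "event", "page", "concept"]  # toy vocabulary
topics = np.array([                       # each topic is a distribution over words
    [0.40, 0.30, 0.20, 0.05, 0.03, 0.02],
    [0.05, 0.05, 0.30, 0.30, 0.20, 0.10],
])
alpha = 0.5                               # Dirichlet prior on topic proportions

def generate_document(n_words=10):
    theta = rng.dirichlet(alpha * np.ones(len(topics)))  # document = mixture of topics
    words = []
    for _ in range(n_words):
        z = rng.choice(len(topics), p=theta)             # pick a topic for this word
        words.append(vocab[rng.choice(len(vocab), p=topics[z])])  # draw word from it
    return theta, words

print(generate_document())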
12
• We only observe the documents
• The other structures are hidden variables
• Our goal is to infer the hidden variables by computing their distribution
conditioned on the documents:
p(topics, proportions, assignments | documents)
Probabilistic Topic Models
Graphical Models
13
• Encodes assumptions
• Defines a factorization of the joint distribution
• Connects to algorithms for computing with data
Plate Notation:
• Nodes are random variables
• Edges indicate dependence
• Shaded nodes are observed
• Plates indicate replicated variables
(Figure labels: proportions parameter, per-document topic proportions, per-word topic assignment, observed word, per-topic word proportions, topic parameter; plates over Documents, Topics and Words)
14
(LDA plate diagram: a proportions parameter governs the per-document topic proportions; each word gets a per-word topic assignment and is observed; a topic parameter governs the per-topic word proportions; plates over Words, Documents and Topics)
Latent Dirichlet Allocation (LDA) [Blei et al, 2003]
15
Approximate posterior inference algorithms:
- Mean Field Variational methods [Blei et al., 2001, 2003]
- Expectation Propagation [Minka and Lafferty, 2002]
- Collapsed Gibbs Sampling [Griffiths and Steyvers, 2002]
- Collapsed Variational Inference [Teh et al., 2006]
- Online Variational Inference [Hoffman et al., 2010]
Probabilistic Topic Models as Graphical Models
From a collection of documents, infer:
- Per-word topic assignment z_d,n
- Per-document topic proportions θ_d
- Per-corpus topic distributions β_k
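To make the inference step concrete, a compact and deliberately unoptimized collapsed Gibbs sampler sketch for LDA, assuming documents arrive as lists of integer word ids (an illustration, not the implementation used later in this tutorial):

import numpy as np

def lda_gibbs(docs, V, K, alpha=0.1, beta=0.01, iters=100, seed=0):
    # docs: list of documents, each a list of word ids in [0, V)
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))                    # document-topic counts
    nkw = np.zeros((K, V))                            # topic-word counts
    nk = np.zeros(K)                                  # words assigned to each topic
    z = [rng.integers(K, size=len(doc)) for doc in docs]  # random initial assignments
    for d, doc in enumerate(docs):                    # initialize the counts
        for n, w in enumerate(doc):
            k = z[d][n]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                k = z[d][n]                           # remove current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # conditional p(z_d,n = k | all other assignments), up to normalization
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())      # resample z_d,n
                z[d][n] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)  # θ_d estimates
    phi = (nkw + beta) / (nkw + beta).sum(axis=1, keepdims=True)      # β_k estimates
    return theta, phi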
16
Dirichlet Distribution
• Exponential family distribution over the simplex, i.e. positive vectors that sum to one
• The parameter α controls the mean shape and sparsity of θ
probability distribution of documents over topics (dt): [dt1, dt2, dt3], dt1 + dt2 + dt3 = 1
probability distribution of words over topics (wt): [wt1, wt2, wt3], wt1 + wt2 + wt3 = 1
Latent Dirichlet Allocation (LDA) [Blei et al., 2003]
(Figure: simplex with corners Topic 1, Topic 2 and Topic 3; each point, e.g. a word, is a distribution over the three topics)
17
Latent Dirichlet Allocation (LDA) [Blei et al, 2003]
š›¼=100
18
Latent Dirichlet Allocation (LDA) [Blei et al, 2003]
š›¼=1
19
Latent Dirichlet Allocation (LDA) [Blei et al, 2003]
š›¼=0.1
20
Latent Dirichlet Allocation (LDA) [Blei et al, 2003]
š›¼=0.001
21
Outline
1. Artifacts
2. Parameters
3. Training
4. Evaluation and Interpretation
5. Inference
6. Trends
7. Domains
8. Topic-based Similarity
22
http://librairy.linkeddata.es/api
ā– Distributing Text Mining tasks with librAIry. ā€Ø
Badenes-Olmedo, C.; Redondo-Garcia, J.; Corcho, O.ā€Ø
In Proceedings of the 2017 ACM Symposium on Document Engineering (DocEng '17).
ACM, 63-66. DOI: https://doi.org/10.1145/3103010.3121040
password: kcap2017user: oeg
23
• kcap:
<librairy-api>/domains/kcap
• kcap2015:
<librairy-api>/domains/kcap2015
• kcap2017:
<librairy-api>/domains/kcap2017
READ
CORPUS
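A hypothetical Python sketch of reading these domains; the slides only show a user and password, so HTTP basic auth is an assumption here:

import requests

API = "http://librairy.linkeddata.es/api"
AUTH = ("oeg", "kcap2017")               # credentials from slide 22; auth scheme assumed

for domain in ("kcap", "kcap2015", "kcap2017"):
    r = requests.get(f"{API}/domains/{domain}", auth=AUTH)
    r.raise_for_status()                 # fail loudly on HTTP errors
    print(domain, r.json())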
24
• Documents per Corpus:
<librairy-api>/domains/kcap/items
• Document Content:
<librairy-api>/items/3707eb49b81fb67e76bf1d40da842275?content=true
• Document (noun) Lemmas:
<librairy-api>/items/3707eb49b81fb67e76bf1d40da842275/annotations/lemma
• Document Annotations:
<librairy-api>/items/3707eb49b81fb67e76bf1d40da842275/annotations
READ
DOCUMENTS
25
• Topics per Corpus:
<librairy-api>/domains/kcap/topics?words=10
• Topic Details:
<librairy-api>/domains/kcap/topics/0
• Most Representative Documents in a Topic:
<librairy-api>/domains/kcap/topics/0/items
• Topics per Document:
<librairy-api>/domains/kcap/items/d8933e1de1888ccbdf4e0df42c404d7e/topics?words=5
READ
TOPICS
26
let's create an LDA model
27
• Parameters:
[GET] <librairy-api>/domains/{domainId}/parameters
[POST] <librairy-api>/domains/{domainId}/parameters
✓ lda.alpha = 0.1
✓ lda.beta = 0.01
✓ lda.topics = 6
✓ lda.vocabulary.size = 10000
✓ lda.max.iterations = 100
✓ lda.optimizer = manual (basic, nsga)
✓ lda.stopwords = example,service
CREATE
{
"name": "lda.beta",
"value": "0.01"
}
TOPICS
• (Re)Train a model: [PUT] <librairy-api>/domains/kcap/topics
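Putting these endpoints together, a hedged sketch of configuring and retraining the kcap model (same basic-auth assumption as before; parameter names taken from the checklist above):

import requests

API = "http://librairy.linkeddata.es/api"
AUTH = ("oeg", "kcap2017")                       # assumed basic auth

# set one LDA parameter per request, mirroring the JSON example above
for name, value in [("lda.topics", "6"), ("lda.alpha", "0.1"), ("lda.beta", "0.01")]:
    r = requests.post(f"{API}/domains/kcap/parameters",
                      json={"name": name, "value": value}, auth=AUTH)
    r.raise_for_status()

# (re)train the topic model
requests.put(f"{API}/domains/kcap/topics", auth=AUTH).raise_for_status()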
28
CREATE
TOPICS
• Move to the 'lda' stage:
$ git checkout lda
• Adjust parameters.properties:
$ nano parameters.properties
• Try:
$ docker start -i test
• Results in 'output/models/lda' folder
29
CREATE
TOPICS
• See 'output/models/lda/kcap' folder:
$ dataset.csv.gz        # texts
$ model.documents.txt   # topics per document
$ model.parameters.txt  # model settings
$ model.topics.txt      # topic words
$ model.uris.txt        # document URIs
$ model.vocabulary.txt  # list of words
$ model.words.txt       # topics per word in document
30
Outline
1. Artifacts
2. Parameters
3. Training
4. Evaluation and Interpretation
5. Inference
6. Trends
7. Domains
8. Topic-based Similarity
Evaluation and Interpretation
31
Evaluation and Interpretation
32
• Synthetic Data
- Because topic models have a generative process, we can generate data
- Latent variables are no longer latent
- Take into account topic identifiers
- Semi-synthetic data can be a good compromise: train from real data and generate new documents from that model
• Baselines and Metrics
- Compare to another model that has either a similar structure or a similar application
- Perplexity or held-out likelihood: probability of held-out data given the settings of the model
- Accuracy: prediction performance (purpose oriented)
How well the model fits the data to be modelled
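Held-out likelihood is easy to try with off-the-shelf tooling; a sketch using gensim (not the librAIry stack) on a toy tokenized corpus:

from gensim.corpora import Dictionary
from gensim.models import LdaModel

train_texts = [["ontology", "query", "dataset"],
               ["event", "concept", "page"],
               ["dataset", "event", "query"]]          # toy training documents
heldout_texts = [["ontology", "dataset", "concept"]]   # toy held-out document

dictionary = Dictionary(train_texts)
train = [dictionary.doc2bow(t) for t in train_texts]
heldout = [dictionary.doc2bow(t) for t in heldout_texts]

lda = LdaModel(corpus=train, id2word=dictionary, num_topics=2, passes=10)
# per-word likelihood bound on unseen data; lower perplexity means a better fit
print("held-out log perplexity bound:", lda.log_perplexity(heldout))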
Evaluation and Interpretation
33
Chang, J., Gerrish, S., Wang, C., & Blei, D. M. (2009). Reading Tea Leaves: How Humans Interpret Topic Models. Advances in Neural Information Processing Systems 22, 288–296
• Word Intrusion: how semantically 'cohesive' the topics inferred by a model are, and whether topics correspond to natural groupings for humans (Topic Coherence); e.g. find the intruder in:
{ dog, cat, horse, apple, pig, cow }
{ car, teacher, platypus, agile, blue, Zaire }
• Topic Intrusion: how well a topic model's decomposition of a document as a mixture of topics agrees with human associations of topics with a document
How interpretable is the model by a human
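Automated coherence scores are commonly used as a cheap proxy for the word-intrusion test; a sketch with gensim's CoherenceModel (an approximation, not the human study from the paper):

from gensim.corpora import Dictionary
from gensim.models import CoherenceModel

texts = [["dog", "cat", "horse", "pig", "cow"],        # toy reference corpus
         ["apple", "pear", "fruit"],
         ["car", "engine", "road"]]
dictionary = Dictionary(texts)

topic = [["dog", "cat", "horse", "apple", "pig", "cow"]]  # word set with an intruder
cm = CoherenceModel(topics=topic, texts=texts,
                    dictionary=dictionary, coherence="c_v")
print("coherence:", cm.get_coherence())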
34
Outline
1. Artifacts
2. Parameters
3. Training
4. Evaluation and Interpretation
5. Inference
6. Trends
7. Domains
8. Topic-based Similarity
35
• Create a Document:
[POST] <librairy-api>/items/{id}
• Add it to a Domain:
[POST] <librairy-api>/domains/{domainId}/items/{itemId}
• Read Topics per Document:
[GET] <librairy-api>/domains/{domainId}/items/{id}/topics
INFERENCE TOPIC DISTRIBUTIONS
CREATE
{
  "content": "string",
  "language": "string",
  "name": "string"
}
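A hedged end-to-end sketch of these three calls (the item id "my-doc" and the basic-auth scheme are assumptions for illustration):

import requests

API = "http://librairy.linkeddata.es/api"
AUTH = ("oeg", "kcap2017")                       # assumed basic auth

doc = {"content": "ontologies and datasets for event detection",
       "language": "en",
       "name": "my-doc"}

requests.post(f"{API}/items/my-doc", json=doc, auth=AUTH).raise_for_status()
requests.post(f"{API}/domains/kcap/items/my-doc", auth=AUTH).raise_for_status()

topics = requests.get(f"{API}/domains/kcap/items/my-doc/topics", auth=AUTH).json()
print(topics)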
36
CREATE
• Move to the 'inference' stage:
$ git checkout inference
• Write text in file.txt:
$ nano file.txt
• Execute:
$ docker start -i test
• Results in 'output/inferences/lda' folder
37
Outline
1. Artifacts
2. Parameters
3. Training
4. Evaluation and Interpretation
5. Inference
6. Trends
7. Domains
8. Topic-based Similarity
Correlated Topic Model (CTM) [Blei and Lafferty, 2006]
38
Blei, D. M., & Lafferty, J. D. (2006). Correlated Topic Models. Advances in Neural Information Processing Systems 18, 147–154
The Correlated Topic Model (CTM) uses a logistic normal distribution for the topic proportions instead of a Dirichlet distribution, to allow representing correlations between topics
Extending LDA
39
• A Dynamic Topic Model (DTM) uses a logistic normal in a linear dynamic model to capture how topics change over time
• Supervised LDA models documents and their responses, fit to find topics predictive of the response
40
COMPARE TOPICS
CREATE
• Move to the 'trends' stage:
$ git checkout trends
• Adjust parameters.properties:
$ nano parameters.properties
• Execute:
$ docker start -i test
• Results in 'output/similarities/topics' folder
41
CREATE
• See the 'output/similarities/topics' folder:
$ similarities.csv      # pairwise topic similarities
$ topics.{domain}.txt   # topic words in domain
Kumar, R., & Vassilvitskii, S. (2010). Generalized distances between rankings. Proceedings of the 19th International Conference on World Wide Web - WWW '10, 571. http://doi.org/10.1145/1772690.1772749
COMPARE TOPICS
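The cited paper develops generalized distances between rankings; as a much simpler stand-in, a sketch that compares two topics by the average overlap of their top-k word rankings:

def rank_overlap(words_a, words_b, depth=10):
    # average Jaccard overlap over increasing prefixes of two ranked word lists;
    # a crude stand-in for the generalized rank distances of Kumar & Vassilvitskii
    scores = []
    for d in range(1, depth + 1):
        a, b = set(words_a[:d]), set(words_b[:d])
        scores.append(len(a & b) / len(a | b))
    return sum(scores) / depth

t1 = ["ontology", "query", "datum", "example", "knowledge"]   # toy ranked topic words
t2 = ["ontology", "knowledge", "class", "query", "instance"]
print(rank_overlap(t1, t2, depth=5))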
42
Outline
1. Artifacts
2. Parameters
3. Training
4. Evaluation and Interpretation
5. Inference
6. Trends
7. Domains
8. Topic-based Similarity
Historical Domain
43
• Finding themes in documents that reflect temporal trends
• Looking not just for the topics of each time period, but for how those topics shift in concentration as they are influenced by historical events
• Discovering how events are reflected in writing and how ideas and emotions emerge in response to changing events
Topic Models in Historical Documents
44
Allen Beye Riddell. How to Read 22,198 Journal Articles: Studying the History of German Studies with Topic Models, pages 91–113. Camden House, 2012.
An LDA model trained with 150 topics on 22,198 articles from JSTOR in the 20th century, removing numbers and articles having fewer than 200 words
Historical Domain
45
Allen Beye Riddell. How to Read 22,198 Journal Articles: Studying the History of German Studies with Topic Models, pages 91–113. Camden House, 2012.
Historical Domain
46
• the chronological trajectory of a topic is not the same thing as the trajectory of the individual words that compose it
• individual topics always need to be interpreted in the context of the larger model
• it becomes essential to validate the description provided by a topic model by reference to something other than the topic model itself
Conclusions
Historical Domain
Scientiļ¬c Domain
47
ā€¢ Specialized vocabularies deļ¬ne ļ¬elds of studyā€Ø
ā€¢ Scientiļ¬c documents innovate: unlike other domains, scientiļ¬c documents are not just
reports of news or events; they are news. Innovation is hard to detect and hard to
attributeā€Ø
ā€¢ Science and Policy: understanding scientiļ¬c publications is important for funding
agencies, lawmakers, and the public.
Topic Models in Scientiļ¬c Publications
48
Thomas L. Griffiths and Mark Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(Suppl 1):5228–5235, 2004
Reconstruct the official Proceedings of the National Academy of Sciences (PNAS) topics automatically using topic models
28,154 abstracts published in PNAS from 1991 to 2001
LDA, beta=0.1, alpha=50/K, K=numTopics=50/100/200../1000
Scientific Domain
49
Thomas L. Griffiths and Mark Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(Suppl 1):5228–5235, 2004
Scientific Domain
50
Zhou, Ding, Xiang Ji, Hongyuan Zha and C. Lee Giles. Topic evolution and social interactions: how authors effect research. CIKM (2006).
given a seemingly new topic, from where does this topic evolve?
is a newly emergent topic truly new, or rather a variation of an old topic?
CiteSeer Dataset
hypothesis
• "one topic evolves into another topic when the corresponding social actors interact with other actors with different topics in the latent social network"
Scientific Domain
51
Zhou, Ding, Xiang Ji, Hongyuan Zha and C. Lee Giles. Topic evolution and social
interactions: how authors effect research. CIKM (2006).
Topics with heavy methodology requirements (e.g. NP problems, linear systems) and/or popular topics (e.g. mobile computing, network) are more likely to remain stable. By contrast, topics closely related to applications (e.g. data mining in databases, security) are more likely to have higher transition probabilities than other topics, all things being equal
Scientific Domain
52
Zhou, Ding, Xiang Ji, Hongyuan Zha and C. Lee Giles. Topic evolution and social
interactions: how authors effect research. CIKM (2006).
Scientiļ¬c Domain
53
• Understanding science communication allows us to see how our understanding of nature, technology, and engineering has advanced over the years
• Topic models can capture how these fields have changed and have gained additional knowledge with each new discovery
• As the scientific enterprise becomes more distributed and faster moving, these tools are important for scientists hoping to understand trends and developments and for policy makers who seek to guide innovation
Conclusions
Scientific Domain
Fiction and Literature Domain
54
• What is a document?: Treating novels as a single bag of words does not work. Topics resulting from this corpus treatment are overly vague and lack thematic coherence
• People and Places: Because most works of fiction are set in imaginary worlds that do not exist outside the work itself, they have words such as character names that are extremely frequent locally but never occur elsewhere
• Beyond the Literal: Documents are valued not just for their information content but for their artistic expression
• Topic models complement close reading in two ways: as a survey method and as a means for tracing and comparing large-scale patterns
Topic Models in Literature
Fiction and Literature
55
Beyond the Literal
Lisa M. Rhody. Topic modeling and figurative language. Journal of Digital Humanities, 2(1), 2012
In a corpus of 4600 poems and a sixty-topic model, one of the topics discovered:
{ night, light, moon, stars, day, dark, sun, sleep, sky, wind, time, eyes, star, darkness, bright }
Analyzing the contexts in which the topic appears (i.e. the poems) shows that they use a metaphor relating night and sleep to death.
Thus, the topic can be read as "death as metaphor".
Rhody [2012] demonstrates on a corpus of poetry that although topics do not represent
symbolic meanings, they are a good way of detecting the concrete language associated with
repeated metaphors.
Fiction and Literature
56
• Topic models cannot by themselves study literature, but they are useful tools for scholars studying literature
• Literary concepts are complicated, but they often have surprisingly strong statistical signatures
• Models can still be useful in identifying areas of potential interest, even if they don't "understand" what they are finding
Conclusions
57
Outline
1. Artifacts
2. Parameters
3. Training
4. Evaluation and Interpretation
5. Inference
6. Trends
7. Domains
8. Topic-based Similarity
58
Topic Model
DOCUMENT 'A': [ topic0: 0.520, topic1: 0.327, topic2: 0.081, …, topic122: 0.182 ]
DOCUMENT 'B': [ topic1: 0.573, topic2: 0.172, topic3: 0.136, …, topic122: 0.099 ]
Similarity Value: 0.595
Topic-based Similarity
Jensen-Shannon Divergence
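A sketch of this comparison with scipy; note that scipy's jensenshannon returns the square root of the divergence, and 1 minus the divergence is one common way to turn it into a similarity (the toy vectors below are illustrative, not the 123-topic vectors from the slide):

import numpy as np
from scipy.spatial.distance import jensenshannon

doc_a = np.array([0.520, 0.327, 0.081, 0.072])   # toy topic distribution, document A
doc_b = np.array([0.000, 0.573, 0.172, 0.255])   # toy topic distribution, document B

jsd = jensenshannon(doc_a, doc_b, base=2) ** 2   # divergence lies in [0, 1] for base 2
print("similarity:", round(1.0 - jsd, 3))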
59
Representativeness
Badenes-Olmedo, C., Redondo-Garcia, J. L., & Corcho, O. (2017). An Initial Analysis of Topic-based Similarity among Scientific Documents based on their rhetorical discourse parts. In Proceedings of the 1st SEMSCI workshop co-located with ISWC.
(Figure: in librAIry, full papers and summaries are compared via JSD-based similarity, evaluated internally and externally: finding related items and describing main ideas, measured by precision / recall / f-measure)
60
librAIry
• kcap:
[GET] <librairy-api>/domains/{domainId}/items/{itemId}/relations
RELATIONS
READ
61
CREATE
RELATIONS
• Move to the 'similarity' stage:
$ git checkout similarity
• Adjust parameters.properties:
$ nano parameters.properties
• Execute:
$ docker start -i test
• Results in 'output/models/similarities' folder
62
CREATE
• Open 'similarity-network.html' in a browser (Firefox)
63
CREATE
• Try with input.domain=ecommerce (1000 nodes)
64
References
• Jurafsky, D., & Martin, J. H. (2016). Language Modeling with N-grams. Speech and Language Processing, 1–28.
• Boyd-Graber, J., Hu, Y., & Mimno, D. (2017). Applications of Topic Models. Foundations and Trends® in Information Retrieval, 11(2-3), 143–296.
• Blei, D. M., Carin, L., & Dunson, D. B. (2010). Probabilistic Topic Models. IEEE Signal Processing Magazine, 27, 55–65.
• Zhai, C. (2017). Probabilistic Topic Models for Text Data Retrieval and Analysis. SIGIR.
• Wang, C., Blei, D., & Heckerman, D. (2008). Continuous Time Dynamic Topic Models. Proc. of UAI, 579–586.
• Chang, J., Gerrish, S., Wang, C., & Blei, D. M. (2009). Reading Tea Leaves: How Humans Interpret Topic Models. Advances in Neural Information Processing Systems 22, 288–296.
• Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet Allocation. Journal of Machine Learning Research, 3(4–5), 993–1022.
• Cheng, X., Yan, X., Lan, Y., & Guo, J. (2014). BTM: Topic Modeling over Short Texts. IEEE Transactions on Knowledge and Data Engineering.
• Blei, D. M., & Lafferty, J. D. (2007). A correlated topic model of Science. The Annals of Applied Statistics, 1(1), 17–35.
• Griffiths, T. L., Jordan, M. I., Tenenbaum, J. B., & Blei, D. M. (2004). Hierarchical Topic Models and the Nested Chinese Restaurant Process. Advances in Neural Information Processing Systems, 17–24.
• Badenes-Olmedo, C.; Redondo-Garcia, J.; Corcho, O. (2017). Distributing Text Mining tasks with librAIry. In Proceedings of the 2017 ACM Symposium on Document Engineering (DocEng '17). ACM, 63–66.
Probabilistic
Topic Models
Badenes-Olmedo, Carlos
Corcho, Oscar
Ontology Engineering Group (OEG)
Universidad Politécnica de Madrid (UPM)
K-CAP 2017
Knowledge Capture

December 4th-6th, 2017

Austin, Texas, United States
cbadenes@fi.upm.es
@carbadol
github.com/librairy
oeg-upm.net

More Related Content

What's hot

Dual Embedding Space Model (DESM)
Bhaskar Mitra
From Open Linked Data towards an Ecosystem of Interlinked Knowledge
Sören Auer
Tutorial@BDA 2017 -- Knowledge Graph Expansion and Enrichment
Paris Sud University

What's hot (15)

Relations for Reusing (R4R) in A Shared Context: An Exploration on Research P...
Progress Towards Leveraging Natural Language Processing for Collecting Experi...
LOTUS: Adaptive Text Search for Big Linked Data
Visualising the Australian open data and research data landscape
Dual Embedding Space Model (DESM)
How to clean data less through Linked (Open Data) approach?
Learning Systems for Science
From Open Linked Data towards an Ecosystem of Interlinked Knowledge
20160818 Semantics and Linkage of Archived Catalogs
Accelerating materials design through natural language processing
Tutorial@BDA 2017 -- Knowledge Graph Expansion and Enrichment
A Linked Data Prototype for the Union Catalog of Digital Archives Taiwan
Australian Open government and research data pilot survey 2017
Lightning fast genomics with Spark, Adam and Scala
Materials design using knowledge from millions of journal articles via natura...

Similar to Probabilistic Topic models

TopicModels_BleiPaper_Summary.pptx
Kalpit Desai
The Rhetoric of Research Objects
Carole Goble
LSI latent (par HATOUM Saria et DONGO ESCALANTE Irvin Franco)
rchbeir
Semantics-enhanced Cyberinfrastructure for ICMSE: Interoperability, Analyti...
Artificial Intelligence Institute at UofSC

Similar to Probabilistic Topic models (20)

Knowledge Graph Construction and the Role of DBPedia
TopicModels_BleiPaper_Summary.pptx
A Data Ecosystem to Support Machine Learning in Materials Science
Research Objects: more than the sum of the parts
The Rhetoric of Research Objects
Topic Extraction on Domain Ontology
CiteSeerX: Mining Scholarly Big Data
Data Science Keys to Open Up OpenNASA Datasets
Data Science Keys to Open Up OpenNASA Datasets - PyData New York 2017
Sybrandt Thesis Proposal Presentation
Pemanfaatan Big Data Dalam Riset 2023.pptx
Discovery Engines for Big Data: Accelerating Discovery in Basic Energy Sciences
The Research Object Initiative: Frameworks and Use Cases
LSI latent (par HATOUM Saria et DONGO ESCALANTE Irvin Franco)
What Are Links in Linked Open Data? A Characterization and Evaluation of Link...
Introducing the Whole Tale Project: Merging Science and Cyberinfrastructure P...
Knowledge Graph Research and Innovation Challenges
Semantics-enhanced Cyberinfrastructure for ICMSE: Interoperability, Analyti...
A Clean Slate?
The Future of Semantics on the Web

More from Carlos Badenes-Olmedo (11)

NLP and Knowledge Graphs
Semantically-enabled Browsing of Large Multilingual Document Collections
Scalable Cross-lingual Document Similarity through Language-specific Concept ...
Crosslingual search-engine
Cross-lingual Similarity
Multilingual searchapi
Multilingual document analysis
Topic Models Exploration
Docker Introduction
Distributing Text Mining tasks with librAIry
Efficient Clustering from Distributions over Topics