Audio Music Similarity is a task within Music Information Retrieval that deals with systems that retrieve songs musically similar to a query song according to their audio content. Evaluation experiments are the main scientific tool in Information Retrieval to determine what systems work better and advance the state of the art accordingly. It is therefore essential that the conclusions drawn from these experiments are both valid and reliable, and that we can reach them at a low cost. This dissertation studies these three aspects of evaluation experiments for the particular case of Audio Music Similarity, with the general goal of improving how these systems are evaluated. The traditional paradigm for Information Retrieval evaluation based on test collections is approached as a statistical estimator of certain probability distributions that characterize how users employ systems. In terms of validity, we study how well the measured system distributions correspond to the target user distributions, and how this correspondence affects the conclusions we draw from an experiment. In terms of reliability, we study the optimal characteristics of test collections and statistical procedures, and in terms of efficiency we study models and methods to greatly reduce the cost of running an evaluation experiment.
4. Information Retrieval
• Automatic representation, storage and search of
unstructured information
– Traditionally textual information
– Lately multimedia too: images, video, music
• A user has an information need and uses an IR
system that retrieves the relevant or significant
information from a collection of documents
4
5. Information Retrieval Evaluation
• IR systems are based on models to estimate
relevance, implementing different techniques
• How good is my system? What system is better?
• Answered with IR Evaluation experiments
– “if you can’t measure it, you can’t improve it”
– But we need to be able to trust our measurements
• Research on IR Evaluation
– Improve our methods to evaluate systems
– Critical for the correct development of the field
5
6. History of IR Evaluation research
[Timeline figure, 1960-2010: early milestones such as Cranfield 2, MEDLARS and SMART in the 1960s; SIGIR from the 1970s; the large evaluation campaigns TREC, CLEF, NTCIR and INEX from the 1990s-2000s; and, in Music IR, ISMIR, MIREX, MusiCLEF and the MSD Challenge through the 2000s-2010s.]
6
11. Audio Music Similarity
• Song as input to system, audio signal
• Retrieve songs musically similar to it, by content
• Resembles traditional Ad Hoc retrieval in Text IR
• (most?) Important task in Music IR
– Music recommendation
– Playlist generation
– Plagiarism detection
• Annual evaluation in MIREX
7
14. The two questions
• How good is my system?
– What does good mean?
– What is good enough?
• Is system A better than system B?
– What does better mean?
– How much better?
• Efficiency? Effectiveness? Ease?
10
15. Measure user experience
• We are interested in user-measures
– Time to complete task, idle time, success rate, failure
rate, frustration, ease to learn, ease to use …
– Their distributions fully describe the user experience
• User satisfaction is the bigger picture
– How likely is it that an arbitrary user, with an arbitrary
query (and with an arbitrary document collection) will
be satisfied by the system?
• This is the ultimate goal: the good, the better
11
16. The Cranfield Paradigm
• Estimate user-measure distributions
– Sample documents, queries and users
– Monitor user experience and behavior
– Representativeness, cost, ethics, privacy …
• Fix samples to allow reproducibility
– But cannot fix users and their behavior
– Remove users, but include a static user component,
fixed across experiments: ground truth judgments
– Still need to include the dynamics of the process: user
models behind effectiveness measures and scales
12
17. Test collections
• Our goal is the users:
user-measure = f(system)
• Cranfield measures systems:
system-effectiveness = f(system, measure, scale)
• Estimators of the distributions of user-measures
– Only source of variability is the systems themselves
– Reproducibility becomes easy
– Experiments are inexpensive (collections are not)
– Research becomes systematic
13
18. Validity, Reliability and Efficiency
• Validity: are we measuring what we want to?
– How well are effectiveness and satisfaction correlated?
– How good is good and how better is better?
• Reliability: how repeatable are the results?
– How large do samples have to be?
– What statistical methods should be used?
• Efficiency: how inexpensive is it to get valid and
reliable results?
– Can we estimate results with fewer judgments?
14
19. Goal of this dissertation
Study and improve
the validity, reliability and efficiency
of the methods used to evaluate AMS systems
Additionally, improve meta-evaluation methods
15
21. Outline
• Introduction
• Validity
– System Effectiveness and User Satisfaction
– Modeling Distributions
• Reliability
• Efficiency
• Conclusions and Future Work
17
22. Assumption of Cranfield
• Systems with better effectiveness are perceived
by users as more useful, more satisfactory
• But different effectiveness measures and
relevance scales produce different distributions
– Which one is better to predict user satisfaction?
• Map system effectiveness onto user satisfaction,
experimentally
– If P@10 = 0.2, how likely is it that an arbitrary user will
find the results satisfactory?
– What if DCG@20 = 0.46?
18
36. What can we infer?
• Preference: difference noticed by user
– Positive: user agrees with evaluation
– Negative: user disagrees with evaluation
• Non-preference: difference not noticed by user
– Good: both systems are satisfactory
– Bad: both systems are not satisfactory
21
37. Data
• Queries, documents and judgments from MIREX
• 4115 unique, artificial examples
• 432 unique queries, 5636 unique documents
• Answers collected via Crowdsourcing
– Quality control with trap questions
• 113 unique subjects
22
38. Single system: how good is it?
• For 2045 examples (49%) users could not decide
which system was better
What do we expect?
23
40. Single system: how good is it?
• Large ℓmin thresholds underestimate satisfaction
24
41. Single system: how good is it?
• Users don’t pay attention to ranking?
25
42. Single system: how good is it?
• Exponential gain underestimates satisfaction
26
43. Single system: how good is it?
• Document utility independent of others
27
44. Two systems: which one is better?
• For 2090 examples (51%) users did prefer one
system over the other one
What do we expect?
28
46. Two systems: which one is better?
• Large differences needed for users to notice them
29
47. Two systems: which one is better?
• More relevance levels are better to discriminate
30
48. Two systems: which one is better?
• Cascade and navigational user models are not
appropriate
31
49. Two systems: which one is better?
• Users do prefer the (supposedly) worse system
32
50. Summary
• Effectiveness and satisfaction are clearly correlated
– But there is a bias of 20% because of user disagreement
– Room for improvement through personalization
• Magnitude of differences does matter
– Just looking at rankings is very naive
• Be careful with statistical significance
– Need Δλ≈0.4 for users to agree with effectiveness
• Historically, only 20% of times in MIREX
• Differences among measures and scales
– Linear gain slightly better than exponential gain
– Informational and positional user models better than
navigational and cascade
– The more relevance levels, the better
33
53. Outline
• Introduction
• Validity
– System Effectiveness and User Satisfaction
– Modeling Distributions
• Reliability
• Efficiency
• Conclusions and Future Work
36
55. Evaluate in terms of user satisfaction
• So far, arbitrary users for a single query
– P(Sat | Ql@5 = 0.61) = 0.7
• Extends easily to n users and a single query
– P(Sat15 = 10 | Ql@5 = 0.61) = 0.21
• What about a sample of queries 𝒬?
– Map queries separately for the distribution of P(Sat)
– For easier mappings, P(Sat | λ) functions are
interpolated with simple polynomials
38
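As a sanity check on the numbers above, a minimal sketch: assuming users behave as independent Bernoulli trials with the mapped single-user probability P(Sat | Ql@5 = 0.61) = 0.7, the 15-user figure follows from a Binomial distribution.

```python
from scipy.stats import binom

# Mapped single-user probability from the slide: P(Sat | Ql@5 = 0.61) = 0.7
p_sat = 0.7

# Probability that exactly 10 out of 15 independent users are satisfied
p_10_of_15 = binom.pmf(10, 15, p_sat)
print(round(p_10_of_15, 2))  # 0.21, as on the slide
```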
56. Expected probability of satisfaction
• Now we can compute point and interval estimates
of the expected probability of satisfaction
• Intuition fails when interpreting effectiveness
39
57. System success
• If P(Sat) ≥ threshold the system is successful
– Setting the threshold was rather arbitrary
– Now it is meaningful, in terms of user satisfaction
• Intuitively, we want the majority of users to find
the system satisfactory
– P(Succ) = P(P(Sat) > 0.5) = 1 − FP(Sat)(0.5)
• Improving queries for which we are bad is
worthier than further improving those for which
we are already good
40
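A small sketch of the success computation, assuming we already have per-query P(Sat) estimates for one system; the values below are made up and the empirical CDF stands in for FP(Sat).

```python
import numpy as np

# Hypothetical per-query probabilities of satisfaction for one system
p_sat = np.array([0.82, 0.64, 0.41, 0.93, 0.55, 0.37, 0.71, 0.66])

# Empirical CDF of P(Sat) at 0.5, and P(Succ) = 1 - F_P(Sat)(0.5):
# the probability that, for a random query, the majority of users are satisfied
F_at_half = np.mean(p_sat <= 0.5)
p_succ = 1.0 - F_at_half
print(p_succ)  # 0.75 for this made-up sample
```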
58. Distribution of P(Sat)
• Need to estimate the cumulative distribution
function of user satisfaction: FP(Sat)
• Not described by a typical distribution family
– ecdf converges, but what is a good sample size?
– Compare with Normal, Truncated Normal and Beta
• Compared on >2M random samples from MIREX
collections, at different query set sizes
• Goodness of fit measured with the Cramér-von Mises ω² statistic
41
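A sketch of that comparison, fitting a Normal and a Beta to made-up per-query P(Sat) values and scoring the fits with the Cramér-von Mises statistic (the Truncated Normal case is analogous; scipy.stats is assumed and the statistics are only compared relative to each other).

```python
import numpy as np
from scipy import stats

# Hypothetical per-query P(Sat) values observed for one system (bounded in [0, 1])
p_sat = np.random.default_rng(7).beta(a=4, b=2, size=50)

# Candidate parametric fits
mu, sigma = stats.norm.fit(p_sat)
a, b, loc, scale = stats.beta.fit(p_sat, floc=0, fscale=1)

# Cramér-von Mises goodness of fit: smaller statistic = better fit
cvm_norm = stats.cramervonmises(p_sat, 'norm', args=(mu, sigma))
cvm_beta = stats.cramervonmises(p_sat, 'beta', args=(a, b, loc, scale))
print(cvm_norm.statistic, cvm_beta.statistic)
```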
59. Estimated distribution of P(Sat)
• More than ≈25 queries in the collection
– ecdf approximates better
• Less than ≈25 queries in the collection
– Normal for graded scales, ecdf for binary scales
• Beta is always the best with the Fine scale
• The more levels in the relevance scale, the better
• Linear gain better than exponential gain
42
60. Intuition fails, again
• Intuitive conclusions based on effectiveness alone
contradict those based on user satisfaction
– E[Δλ] = −0.002
– E[ΔP(Sat)] = 0.001
– E[ΔP(Succ)] = 0.07
43
63. Historically, in MIREX
• Systems are not as satisfactory as we thought
• But they are more successful
– Good (or bad) for some kinds of queries
44
66. Outline
• Introduction
• Validity
– System Effectiveness and User Satisfaction
– Modeling Distributions
• Reliability
• Efficiency
• Conclusions and Future Work
47
67. Outline
• Introduction
• Validity
• Reliability
– Optimality of Statistical Significance Tests
– Test Collection Size
• Efficiency
• Conclusions and Future Work
48
68. Random error
• Test collections are just samples from larger,
possibly infinite, populations
• If we conclude system A is better than B, how
confident can we be?
– The average Δλ over the query sample 𝒬 is just an estimate of the population mean μΔλ
• Usually employ some statistical significance test
for differences in location
• If it is statistically significant, we have confidence
that the true difference is at least that large
49
69. Statistical hypothesis testing
• Set two mutually exclusive hypotheses
– H0 : μΔλ = 0
– H1 : μΔλ ≠ 0
• Run test, obtain p-value = P(Δλ ≥ Δλ𝒬 | H0)
– p ≤ α: statistically significant, high confidence
– p > α: statistically non-significant, low confidence
• Possible errors in the binary decision
– Type I: incorrectly reject H0
– Type II: incorrectly accept H0
50
70. Statistical significance tests
• (Non-)parametric tests
– t-test, Wilcoxon test, Sign test
• Based on resampling
– Bootstrap test, permutation/randomization test
• They make certain assumptions about
distributions and sampling methods
– Often violated in IR evaluation experiments
– Which test behaves better, in practice, knowing that
assumptions are violated?
51
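As an illustration of the resampling family, a minimal paired randomization (permutation) test on per-query score differences; the scores and the number of resamples are made up, and the exhaustive version would enumerate all 2^n sign flips instead of sampling them.

```python
import numpy as np

def permutation_test_paired(scores_a, scores_b, n_perm=100_000, seed=0):
    """Two-sided paired permutation (randomization) test on per-query scores."""
    rng = np.random.default_rng(seed)
    d = np.asarray(scores_a) - np.asarray(scores_b)   # per-query differences
    observed = abs(d.mean())
    # Randomly flip the sign of each per-query difference (swap A and B)
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    perm_means = np.abs((signs * d).mean(axis=1))
    return (perm_means >= observed).mean()            # p-value

# Hypothetical effectiveness scores of two AMS systems on the same 10 queries
a = [0.61, 0.48, 0.72, 0.55, 0.80, 0.33, 0.67, 0.59, 0.71, 0.44]
b = [0.58, 0.50, 0.69, 0.47, 0.74, 0.30, 0.60, 0.57, 0.66, 0.41]
print(permutation_test_paired(a, b))
```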
71. Optimality criteria
• Power
– Achieve significance as often as possible (low Type II)
– Usually increases Type I error rates
• Safety
– Minimize Type I error rates
– Usually decreases power
• Exactness
– Maintain Type I error rate at α level
– Permutation test is theoretically exact
52
72. Experimental design
• Randomly split query set in two
• Evaluate all systems with both subsets
– Simulating two different test collections
• Compare p-values with both subsets
– How well do statistical tests agree with themselves?
– At different α levels
• All systems and queries from MIREX 2007-2011
– >15M p-values
53
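A sketch of this split-half design for one pair of systems, assuming per-query scores for both and using a paired t-test as a stand-in for any of the tests above; the function name and parameters are hypothetical.

```python
import numpy as np
from scipy import stats

def split_half_agreement(scores_a, scores_b, alpha=0.05, trials=1000, seed=1):
    """Randomly split the query set in two halves, test A vs. B on each half,
    and count how often both halves agree on significance at level alpha."""
    rng = np.random.default_rng(seed)
    d = np.asarray(scores_a) - np.asarray(scores_b)   # per-query differences
    agree = 0
    for _ in range(trials):
        idx = rng.permutation(d.size)
        half1, half2 = d[idx[:d.size // 2]], d[idx[d.size // 2:]]
        p1 = stats.ttest_1samp(half1, 0).pvalue       # paired t-test, half 1
        p2 = stats.ttest_1samp(half2, 0).pvalue       # paired t-test, half 2
        agree += (p1 <= alpha) == (p2 <= alpha)
    return agree / trials
```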
73. Power and success
• Bootstrap test is the most powerful
• Wilcoxon, bootstrap and permutation are the
most successful, depending on α level
54
74. Conflicts
• Wilcoxon and t-test are the safest at low α levels
• Wilcoxon is the most exact at low α levels, but the
bootstrap is at usual levels
55
75. Optimal measure and scale
• Power: CGl@5, GAP@5, DCGl@5 and RBPl@5
• Success: CGl@5, GAP@5, DCGl@5 and RBPl@5
• Conflicts: very similar across measures
• Power: Fine, Broad and binary
• Success: Fine, Broad and binary
• Conflicts: very similar across scales
56
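For reference, a sketch of the measures named above under common textbook definitions: linear gain as level / max level, a log2(r+1) discount for DCG, and RBP with persistence p = 0.8. The dissertation's exact parameterizations and normalizations may differ, so this is only illustrative.

```python
import numpy as np

def linear_gain(levels, max_level):
    """Map graded relevance levels (0..max_level) to linear gains in [0, 1]."""
    return np.asarray(levels, dtype=float) / max_level

def cg_at_k(gains, k=5):
    return gains[:k].sum()

def dcg_at_k(gains, k=5):
    ranks = np.arange(1, min(k, len(gains)) + 1)
    return (gains[:k] / np.log2(ranks + 1)).sum()      # log2(r+1) discount

def rbp_at_k(gains, k=5, p=0.8):
    ranks = np.arange(1, min(k, len(gains)) + 1)
    return (1 - p) * (gains[:k] * p ** (ranks - 1)).sum()

# Hypothetical Broad-style judgments (0..2) for the top 5 results of one query
gains = linear_gain([2, 1, 2, 0, 1], max_level=2)
print(cg_at_k(gains), dcg_at_k(gains), rbp_at_k(gains))
```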
76. Outline
• Introduction
• Validity
• Reliability
– Optimality of Statistical Significance Tests
– Test Collection Size
• Efficiency
• Conclusions and Future Work
57
77. Outline
• Introduction
• Validity
• Reliability
– Optimality of Statistical Significance Tests
– Test Collection Size
• Efficiency
• Conclusions and Future Work
58
78. Acceptable sample size
• Reliability is higher with larger sample sizes
– But it is also more expensive
– What is an acceptable test collection size?
• Answer with Generalizability Theory
– G-Study: estimate variance components
– D-Study: estimate reliability of different sample sizes
and experimental designs
59
90. G-study: variance components
• Fully crossed experimental design: s × q
λsq = λ + λs + λq + εsq
σ² = σ²s + σ²q + σ²sq
• Estimated with Analysis of Variance
• If σ²s is small or σ²q is large, we need more queries
60
91. D-study: variance ratios
• Stability of absolute scores
Φ(nq) = σ²s / (σ²s + (σ²q + σ²e) / nq)
• Stability of relative scores
Eρ²(nq) = σ²s / (σ²s + σ²e / nq)
• We can easily estimate how many queries are
needed to reach some level of stability (reliability)
61
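A sketch of how these coefficients can be computed from a systems × queries score matrix, using the standard ANOVA estimators of the variance components; the data below are made up and the function names are only illustrative of the G-study / D-study split on the previous slides.

```python
import numpy as np

def g_study(scores):
    """G-study: estimate variance components from a fully crossed
    systems x queries matrix of effectiveness scores."""
    s, q = scores.shape
    grand = scores.mean()
    sys_means = scores.mean(axis=1)
    qry_means = scores.mean(axis=0)
    ms_s = q * ((sys_means - grand) ** 2).sum() / (s - 1)   # mean squares
    ms_q = s * ((qry_means - grand) ** 2).sum() / (q - 1)
    resid = scores - sys_means[:, None] - qry_means[None, :] + grand
    ms_e = (resid ** 2).sum() / ((s - 1) * (q - 1))
    var_e = ms_e                                            # residual component
    var_s = max((ms_s - ms_e) / q, 0.0)                     # system component
    var_q = max((ms_q - ms_e) / s, 0.0)                     # query component
    return var_s, var_q, var_e

def d_study(var_s, var_q, var_e, n_q):
    """D-study: stability of absolute (Phi) and relative (Erho2) scores
    for a hypothetical collection with n_q queries."""
    phi = var_s / (var_s + (var_q + var_e) / n_q)
    erho2 = var_s / (var_s + var_e / n_q)
    return phi, erho2

# Made-up example: 8 systems evaluated on 40 queries
scores = np.random.default_rng(0).beta(2, 2, size=(8, 40))
print(d_study(*g_study(scores), n_q=40))
```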
93. Effect of query set size
• Average absolute stability Φ = 0.97
• ≈65 queries needed for Φ = 0.95, ≈100 in worst cases
• Fine scale slightly better than Broad and binary scales
• RBPl@5 and nDCGl@5 are the most stable
62
94. Effect of query set size
• Average relative stability Eρ² = 0.98
• ≈35 queries needed for Eρ² = 0.95, ≈60 in worst cases
• Fine scale better than Broad and binary scales
• CGl@5 and RBPl@5 are the most stable
63
95. Effect of cutoff k
• What if we use a deeper cutoff, k=10?
– From 100 queries and k=5 to 50 queries and k=10
– Should still have stable scores
– Judging effort should decrease
– Rank-based measures should become more stable
• Tested in MIREX 2012
– Apparently in 2013 too
64
96. Effect of cutoff k
• Judging effort reduced to 72% of the usual
• Generally stable
– From Φ = 0.81 to Φ = 0.83
– From Eρ2 = 0.93 to Eρ2 = 0.95
65
97. Effect of cutoff k
• Reliability given a fixed budget for judging?
– k=10 allows us to use fewer queries, about 70%
– Slightly reduced relative stability
66
98. Effect of assessor set size
• More assessors or simply more queries?
– Judging effort is multiplied
• Can be studied with MIREX 2006 data
– 3 different assessors per query
– Nested experimental design: s × h: q
67
99. Effect of assessor set size
• Broad scale: σ²s ≈ σ²h:q
• Fine scale: σ²s ≫ σ²h:q
• Always better to spend resources on queries
68
100. Summary
• MIREX collections generally larger than necessary
• For fixed budget
– More queries better than more assessors
– More queries slightly better than deeper cutoff
• Worth studying alternative user model?
• Employ G-Theory while building the collection
• Fine better than Broad, better than binary
• CGl@5 and DCGl@5 best for relative stability
• RBPl@5 and nDCGl@5 best for absolute stability
69
101. Outline
• Introduction
• Validity
• Reliability
– Optimality of Statistical Significance Tests
– Test Collection Size
• Efficiency
• Conclusions and Future Work
70
103. Probabilistic evaluation
• The MIREX setting is still expensive
– Need to judge all top k documents from all systems
– Takes days, even weeks sometimes
• Model relevance probabilistically
• Relevance judgments are random variables over
the space of possible assignments of relevance
• Effectiveness measures are also probabilistic
72
104. Probabilistic evaluation
• Accuracy increases as we make judgments
– E[Rd] → rd
• Reliability increases too (confidence)
– Var[Rd] → 0
• Iteratively estimate relevance and effectiveness
– If confidence is low, make judgments
– If confidence is high, stop
• Judge as few documents as possible
73
105. Learning distributions of relevance
• Uniform distribution is very uninformative
• Historical distribution in MIREX has high variance
• Estimate from a set of features: P(Rd = ℓ | θd)
– For each document separately
– Ordinal Logistic Regression
• Three sets of features
– Output-based, can always be used
– Judgment-based, to exploit known judgments
– Audio-based, to exploit musical similarity
74
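A sketch of such a model using statsmodels' proportional-odds OrderedModel as a stand-in for the Ordinal Logistic Regression above. The feature names and data are made up and only illustrate the shape of the problem, not the dissertation's actual feature sets.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Made-up training data: one row per (query, document), features in the spirit
# of Mout (agreement among systems' outputs, metadata matches), and an ordinal
# relevance label on the Broad scale (0, 1, 2).
rng = np.random.default_rng(42)
n = 500
X = pd.DataFrame({
    "output_overlap": rng.uniform(0, 1, n),   # hypothetical feature names
    "same_genre": rng.integers(0, 2, n),
})
latent = 2.5 * X["output_overlap"] + 1.0 * X["same_genre"] + rng.logistic(size=n)
y = pd.cut(latent, bins=[-np.inf, 1.0, 2.5, np.inf], labels=[0, 1, 2]).astype(int)

# Ordinal (proportional-odds) logistic regression: P(R_d = l | theta_d)
model = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
probs = model.predict(X.iloc[:3])             # per-level probabilities
print(np.round(probs, 3))
```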
106. Learned models
• Mout : can be used even without judgments
– Similarity between systems’ outputs
– Genre and artist metadata
• Genre is highly correlated to similarity
– Decent fit, R2 ≈ 0.35
• Mjud : can be used when there are judgments
– Similarity between systems’ outputs
– Known relevance of same system and same artist
• Artist is extremely correlated to similarity
– Excellent fit, R2 ≈ 0.91
75
107. Estimation errors
• Actual vs. predicted by Mout
– 0.36 with Broad and 0.34 with Fine
• Actual vs. predicted by Mjud
– 0.14 with Broad and 0.09 with Fine
• Among assessors in MIREX 2006
– 0.39 with Broad and 0.31 with Fine
• Negligible under the current MIREX setting
76
110. Probabilistic effectiveness measures
• Effectiveness scores are also random variables
• Different approaches to compute estimates
– Deal with dependence of random variables
– Different definitions of confidence
• For measures based on ideal ranking (nDCGl@k
and RBPl@k) we do not have a closed form
– Approximated with Delta method and Taylor series
79
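The dissertation approximates these cases with the Delta method; as a simpler illustration of what a probabilistic measure means, the sketch below estimates the expectation and variance of a normalized cumulated-gain score by sampling relevance assignments from per-document distributions. All numbers are made up.

```python
import numpy as np

def simulate_measure(level_probs, gains, k=5, n_samples=20000, seed=3):
    """Monte Carlo sketch: treat each top-k document's relevance as a random
    variable with distribution P(R_d = l), sample assignments, and estimate
    the expectation and variance of a normalized cumulated-gain score."""
    rng = np.random.default_rng(seed)
    levels = np.arange(level_probs.shape[1])
    samples = np.stack([rng.choice(levels, size=n_samples, p=p)
                        for p in level_probs[:k]], axis=1)
    scores = gains[samples].sum(axis=1) / (gains.max() * k)   # normalized CG@k
    return scores.mean(), scores.var()

# P(R_d = l) for the top 5 documents of one system, 3 relevance levels (made up)
level_probs = np.array([[0.1, 0.3, 0.6],
                        [0.2, 0.5, 0.3],
                        [0.4, 0.4, 0.2],
                        [0.3, 0.4, 0.3],
                        [0.6, 0.3, 0.1]])
gains = np.array([0.0, 0.5, 1.0])   # linear gain for levels 0, 1, 2
print(simulate_measure(level_probs, gains))
```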
111. Ranking without judgments
1. Estimate relevance with Mout
2. Estimate relative differences and rank systems
• Average confidence in the rankings is 94%
• Average accuracy of the ranking is 92%
80
112. Ranking without judgments
• Can we trust individual estimates?
– Ideally, we want X% accuracy when X% confidence
– Confidence slightly overestimated in [0.9, 0.99)
DCGl@5            Broad                      Fine
Confidence        In bin         Accuracy    In bin         Accuracy
[0.5, 0.6)        23 (6.5%)      0.826       22 (6.2%)      0.636
[0.6, 0.7)        14 (4%)        0.786       16 (4.5%)      0.812
[0.7, 0.8)        14 (4%)        0.571       11 (3.1%)      0.364
[0.8, 0.9)        22 (6.2%)      0.864       21 (6%)        0.762
[0.9, 0.95)       23 (6.5%)      0.87        19 (5.4%)      0.895
[0.95, 0.99)      24 (6.8%)      0.917       27 (7.7%)      0.926
[0.99, 1)         232 (65.9%)    0.996       236 (67%)      0.996
E[Accuracy]                      0.938                      0.921
81
113. Relative estimates with judgments
1. Estimate relevance with Mout
2. Estimate relative differences and rank systems
3. While confidence is low (<95%)
1. Select a document and judge it
2. Update relevance estimates with Mjud when possible
3. Update estimates of differences and rank systems
• What documents should we judge?
– Those that are the most informative
– Measure-dependent
82
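A minimal skeleton of steps 1-3 above. Every callback name here is a hypothetical hook standing in for the corresponding component (confidence estimation, measure-dependent document selection, assessor judgment, relevance model update), not an actual API.

```python
def incremental_evaluation(estimate_confidence, most_informative_doc,
                           get_judgment, update_relevance, target=0.95):
    """Judge the most informative unjudged document until the confidence
    in the estimated differences between systems reaches the target."""
    judged = {}                                     # document -> relevance level
    confidence = estimate_confidence(judged)        # from current estimates
    while confidence < target:
        doc = most_informative_doc(judged)          # measure-dependent choice
        judged[doc] = get_judgment(doc)             # ask an assessor
        update_relevance(judged)                    # switch to Mjud where possible
        confidence = estimate_confidence(judged)
    return judged, confidence
```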
114. Relative estimates with judgments
• Judging effort dramatically reduced
– 1.3% with CGl@5, 9.7% with RBPl@5
• Average accuracy still 92%, but improved individually
– 74% of estimates with >99% confidence, 99.9% accurate
– Expected accuracy improves slightly from 0.927 to 0.931
83
115. Absolute estimates with judgments
1. Estimate relevance with Mout
2. Estimate absolute effectiveness scores
3. While confidence is low (expected error >±0.05)
1. Select a document and judge it
2. Update relevance estimates with Mjud when possible
3. Update estimates of absolute effectiveness scores
• What documents should we judge?
– Those that reduce variance the most
– Measure-dependent
84
116. Absolute estimates with judgments
• The stopping condition is overly confident
– Virtually no judgments are even needed (supposedly)
• But effectiveness is highly overestimated
– Especially with nDCGl@5 and RBPl@5
– Mjud, and especially Mout, tend to overestimate relevance
85
117. Absolute estimates with judgments
• Practical fix: correct variance
• Estimates are better, but at the cost of judging
– Need between 15% and 35% of judgments
86
118. Summary
• Estimate ranking of systems with no judgments
– 92% accuracy on average, trustworthy individually
– Statistically significant differences are always correct
• If we want more confidence, judge documents
– As few as 2% needed to reach 95% confidence
– 74% of estimates have >99% confidence and accuracy
• Estimate absolute scores, judging as necessary
– Around 25% needed to ensure error <0.05
87
121. Validity
• Cranfield tells us about systems, not about users
• Provide empirical mapping from system
effectiveness onto user satisfaction
• Room for personalization quantified at 20%
• Need large differences for users to notice them
• Consider full distributions, not just averages
• Conclusions based on effectiveness tend to
contradict conclusions based on user satisfaction
90
122. Reliability
• Different significance tests for different needs
– Bootstrap test is the most powerful
– Wilcoxon and t-test are the safest
– Wilcoxon and bootstrap test are the most exact
• Practical interpretation of p-values
• MIREX collections generally larger than needed
• Spend resources on queries, not on assessors
• User models with deeper cutoffs are feasible
• Employ G-Theory while building collections
91
123. Efficiency
• Probabilistic evaluation reduces cost, dramatically
• Two models to estimate document relevance
• System rankings 92% accurate without judgments
• 2% of judgments to reach 95% confidence
• 25% of judgments to reduce error to 0.05
92
124. Measures and scales
• Best measure and scale depends on situation
• But generally speaking
– CGl@5, DCGl@5 and RBPl@5
– Fine scale
– Model distributions as Beta
93
127. Validity
• User studies to understand user behavior
• What information to include in test collections
• Other forms of relevance judgment to better
capture document utility
• Explicitly define judging guidelines
• Similar mapping for Text IR
96
128. Reliability
• Corrections for Multiple Comparisons
• Methods to reliably estimate reliability while
building test collections
97
129. Efficiency
• Better models to estimate document relevance
• Correct variance when only a few relevance judgments are available
• Estimate relevance beyond k=5
• Other stopping conditions and document weights
98