1. Chapter 3 Retrieval Evaluation
Hsin-Hsi Chen
Department of Computer Science and Information Engineering
National Taiwan University
2. Evaluation
• Function analysis
• Time and space
– The shorter the response time, the smaller the space
used, the better the system is
• Performance evaluation (for data retrieval)
– Performance of the indexing structure
– The interaction with the operating system
– The delays in communication channels
– The overheads introduced by software layers
• Performance evaluation (for information retrieval)
– Besides time and space, retrieval performance is an
issue
3. Retrieval Performance Evaluation
• Retrieval task
– Batch mode
• The user submits a query and receives an answer back
• How the answer set is generated
– Interactive mode
• The user specifies his information need through a series of
interactive steps with the system
• Aspects
– User effort
– Characteristics of the interface design
– Guidance provided by the system
– Duration of the session
4. Recall and Precision
• Recall = |Ra| / |R|
– the fraction of the relevant documents which has been retrieved
• Precision = |Ra| / |A|
– the fraction of the retrieved documents which is relevant
[Figure: the document collection, containing the set of relevant documents |R| and the answer set |A|; their intersection is the set of relevant documents in the answer set, |Ra|.]
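A minimal sketch (not from the original slides) of these two definitions in code; the helper name and document identifiers are illustrative, reusing the answer set and relevant set of the next slide's example.

```python
def precision_recall(answer_set, relevant_docs):
    """Compute (precision, recall) for a retrieved answer set A
    against the set of relevant documents R."""
    ra = set(answer_set) & set(relevant_docs)    # relevant docs in the answer set, |Ra|
    precision = len(ra) / len(answer_set)        # |Ra| / |A|
    recall = len(ra) / len(relevant_docs)        # |Ra| / |R|
    return precision, recall

# Hypothetical example: 4 of the 10 retrieved documents are relevant.
A = ["d123", "d84", "d56", "d6", "d8", "d9", "d511", "d129", "d187", "d25"]
R = ["d3", "d5", "d9", "d25", "d39", "d44", "d56", "d71", "d89", "d123"]
print(precision_recall(A, R))   # (0.4, 0.4)
```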
5. precision versus recall curve
• The user is not usually presented with all the
documents in the answer set A at once
• Example
Rq = {d3, d5, d9, d25, d39, d44, d56, d71, d89, d123}

Ranking for query q by a retrieval algorithm (• marks a relevant document):
  1. d123 •    6. d9 •      11. d38
  2. d84       7. d511      12. d48
  3. d56 •     8. d129      13. d250
  4. d6        9. d187      14. d113
  5. d8        10. d25 •    15. d3 •

(precision, recall) points as the relevant documents are seen:
(100%, 10%), (66%, 20%), (50%, 30%), (40%, 40%), (33%, 50%)
6. 11 standard recall levels
for a query
• precision versus recall based on 11 standard
recall levels: 0%, 10%, 20%, …, 100%
[Figure: interpolated precision versus recall curve, with precision plotted at the 11 standard recall levels (0% to 100%).]
7. 11 standard recall levels
for several queries
• average the precision figures at each recall
level
P(r) = Σ_{i=1}^{Nq} Pi(r) / Nq

• P(r): the average precision at the recall level r
• Nq: the number of queries used
• Pi(r): the precision at recall level r for the i-th query
9. interpolation procedure
• rj (j ∈ {0,1,2,…,10}): a reference to the j-th standard recall level (e.g., r5 refers to the recall level 50%)
• P(rj) = max P(r) over rj ≤ r ≤ rj+1
• Example (a query with 3 relevant documents; observed (precision, recall) points):
  d56 • (33.3%, 33.3%)   d129 • (25%, 66.6%)   d3 • (20%, 100%)
  Interpolated precision at the 11 standard recall levels:
  r0: (33.33%, 0%)    r1: (33.33%, 10%)   r2: (33.33%, 20%)
  r3: (33.33%, 30%)   r4: (25%, 40%)      r5: (25%, 50%)
  r6: (25%, 60%)      r7: (20%, 70%)      r8: (20%, 80%)
  r9: (20%, 90%)      r10: (20%, 100%)
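A minimal sketch of the 11-point interpolation for one ranked list. One assumption not fixed by the slide: the maximum is taken over all recall values greater than or equal to each standard level, which is the usual convention and reproduces the example values above. Function and variable names are illustrative.

```python
def eleven_point_interpolated(ranking, relevant):
    """Interpolated precision at the 11 standard recall levels (0%, 10%, ..., 100%)
    for a single ranked list of documents."""
    relevant = set(relevant)
    points = []                                  # observed (recall, precision) pairs
    hits = 0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / rank))
    interpolated = []
    for j in range(11):                          # standard recall levels r_j = j / 10
        level = j / 10
        precisions = [p for r, p in points if r >= level]
        interpolated.append(max(precisions) if precisions else 0.0)
    return interpolated

ranking = ["d123", "d84", "d56", "d6", "d8", "d9", "d511",
           "d129", "d187", "d25", "d38", "d48", "d250", "d113", "d3"]
print(eleven_point_interpolated(ranking, {"d3", "d56", "d129"}))
# approximately [0.33, 0.33, 0.33, 0.33, 0.25, 0.25, 0.25, 0.2, 0.2, 0.2, 0.2]
```

Averaging these 11-point vectors over the Nq queries gives the P(r) curve of the previous slide.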
10. Precision versus recall figures
compare the retrieval performance of distinct retrieval
algorithms over a set of example queries
• The curve of precision versus recall which results
from averaging the results for various queries
[Figure: average precision versus recall curve over the set of example queries (precision 0-100% against recall 0-100%).]
11. Average Precision at given
Document Cutoff Values
• Compute the average precision when 5, 10,
15, 20, 30, 50 or 100 documents have been
seen.
• Provide additional information on the
retrieval performance of the ranking
algorithm
12. Single Value Summaries
compare the retrieval performance of a retrieval algorithm for
individual queries
• Average precision at seen relevant documents
– Generate a single value summary of the ranking by
averaging the precision figures obtained after each new
relevant document is observed
– Example
  1. d123 • (1.0)    6. d9 • (0.5)       11. d38
  2. d84             7. d511             12. d48
  3. d56 • (0.66)    8. d129             13. d250
  4. d6              9. d187             14. d113
  5. d8              10. d25 • (0.4)     15. d3 • (0.33)
  (1 + 0.66 + 0.5 + 0.4 + 0.33) / 5 = 0.57
– Favors systems which retrieve relevant documents quickly
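A minimal sketch of this single-value summary, assuming (as in the example above) that the divisor is the number of relevant documents actually seen in the ranking, not the total number of relevant documents. The exact arithmetic gives 0.58; the slide rounds the intermediate precisions and obtains 0.57.

```python
def avg_precision_at_seen_relevant(ranking, relevant):
    """Average the precision values observed each time a new relevant
    document appears in the ranking (single-query summary)."""
    relevant = set(relevant)
    precisions, hits = [], 0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

ranking = ["d123", "d84", "d56", "d6", "d8", "d9", "d511",
           "d129", "d187", "d25", "d38", "d48", "d250", "d113", "d3"]
relevant = {"d3", "d5", "d9", "d25", "d39", "d44", "d56", "d71", "d89", "d123"}
print(round(avg_precision_at_seen_relevant(ranking, relevant), 2))  # 0.58
```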
13. Single Value Summaries
(Continued)
• Reciprocal Rank (RR)
– Equal to the precision at the first retrieved relevant document
– Useful for tasks that need only one relevant document,
  e.g., question answering
• Mean Reciprocal Rank (MRR)
– The mean of RR over several queries
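A small sketch of RR and MRR; the rankings and document identifiers below are hypothetical.

```python
def reciprocal_rank(ranking, relevant):
    """RR: 1 / rank of the first retrieved relevant document (0 if none retrieved)."""
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(runs):
    """MRR: mean of RR over several queries; `runs` is a list of
    (ranking, relevant_set) pairs, one per query."""
    return sum(reciprocal_rank(r, rel) for r, rel in runs) / len(runs)

# Hypothetical example with two queries.
runs = [(["d7", "d2", "d9"], {"d2"}),    # first relevant at rank 2 -> RR = 0.5
        (["d4", "d1", "d3"], {"d4"})]    # first relevant at rank 1 -> RR = 1.0
print(mean_reciprocal_rank(runs))        # 0.75
```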
14. Single Value Summaries
(Continued)
• R-Precision
– Generate a single value summary of ranking by
computing the precision at the R-th position in the
ranking, where R is the total number of relevant
documents for the current query
Example 1:
  1. d123 •   6. d9 •
  2. d84      7. d511
  3. d56 •    8. d129
  4. d6       9. d187
  5. d8       10. d25 •
  R = 10 and # relevant in the top R = 4
  R-precision = 4/10 = 0.4

Example 2:
  1. d123
  2. d84
  3. d56 •
  R = 3 and # relevant in the top R = 1
  R-precision = 1/3 = 0.33
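A minimal sketch of R-precision, reusing the example ranking and the 10-document relevant set Rq from the earlier slide; the function name is illustrative.

```python
def r_precision(ranking, relevant):
    """Precision at rank R, where R is the total number of
    relevant documents for the current query."""
    R = len(relevant)
    top_r = ranking[:R]
    return sum(1 for doc in top_r if doc in relevant) / R

ranking = ["d123", "d84", "d56", "d6", "d8", "d9", "d511",
           "d129", "d187", "d25", "d38", "d48", "d250", "d113", "d3"]
relevant = {"d3", "d5", "d9", "d25", "d39", "d44", "d56", "d71", "d89", "d123"}
print(r_precision(ranking, relevant))    # R = 10, 4 relevant in the top 10 -> 0.4
```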
15. Single Value Summaries
(Continued)
• Precision Histograms
– An R-precision graph for several queries
– Compare the retrieval history of two algorithms:
  RPA/B(i) = RPA(i) − RPB(i)
  where RPA(i) and RPB(i) are the R-precision values of retrieval algorithms A and B for the i-th query
– RPA/B(i) = 0: both algorithms have equivalent performance for the i-th query
– RPA/B(i) > 0: A has better retrieval performance for query i
– RPA/B(i) < 0: B has better retrieval performance for query i
17. Summary Table Statistics
• Statistical summary regarding the set of all the
queries in a retrieval task
– the number of queries used in the task
– the total number of documents retrieved by all queries
– the total number of relevant documents which were
effectively retrieved when all queries are considered
– the total number of relevant documents which could
have been retrieved by all queries
– …
18. Precision and Recall
Appropriateness
• Estimation of maximal recall requires knowledge
of all the documents in the collection
• Recall and precision capture different aspects of
the set of retrieved documents
• Recall and precision measure the effectiveness
over queries in batch mode
• Recall and precision are defined under the
enforcement of linear ordering of the retrieved
documents
19. The Harmonic Mean
• harmonic mean F(j) of recall and precision
F(j) = 2 / ( 1/R(j) + 1/P(j) )

• R(j): the recall for the j-th document in the ranking
• P(j): the precision for the j-th document in the ranking

Equivalently, F = (2 × P × R) / (P + R)
21. The E Measure
• E evaluation measure
– Allow the user to specify whether he is more
interested in recall or precision
E(j) = 1 − (1 + b²) / ( b²/R(j) + 1/P(j) )

The related weighted F measure is
F = ( (β² + 1) × P × R ) / ( β² × P + R )
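A minimal sketch of the two formulas above. The parameter names mirror the slides (b for E, beta for F); note that with b = 1 the E measure reduces to 1 − F, which the example checks.

```python
def f_measure(precision, recall, beta=1.0):
    """Weighted F measure; beta = 1 gives the harmonic mean of precision and recall."""
    if precision == 0 and recall == 0:
        return 0.0
    return (beta**2 + 1) * precision * recall / (beta**2 * precision + recall)

def e_measure(precision, recall, b=1.0):
    """E(j) = 1 - (1 + b^2) / (b^2 / R(j) + 1 / P(j)); b sets the relative
    emphasis on recall versus precision."""
    if precision == 0 or recall == 0:
        return 1.0
    return 1.0 - (1 + b**2) / (b**2 / recall + 1 / precision)

P, R = 0.25, 0.666
print(round(f_measure(P, R), 3))              # harmonic mean of P and R
print(round(e_measure(P, R, b=1.0), 3))       # equals 1 - F when b = 1
```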
22. User-oriented measures
• Basic assumption of previous evaluation
– The set of relevant documents for a query is the
same, independent of the user
• User-oriented measures
– coverage ratio
– novelty ratio
– relative recall
– recall effort
23. User-oriented measures (Continued)

coverage = |Rk| / |U|
  – high coverage ratio: the system finds most of the relevant documents the user expected to see

novelty = |Ru| / ( |Ru| + |Rk| )
  – high novelty ratio: the system reveals many new relevant documents which were previously unknown to the user

relative recall = ( |Rk| + |Ru| ) / |U|

recall effort = (# of relevant docs the user expected to find) / (# of docs examined to find the expected relevant docs)

[Figure: the collection contains the relevant docs |R|; the system proposes the answer set |A|; |U| is the set of relevant docs known to the user; |Rk| is the set of relevant docs known to the user which were retrieved; |Ru| is the set of relevant docs previously unknown to the user which were retrieved.]
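A minimal sketch of the first three user-oriented measures, taking the sets |U|, |Rk| and |Ru| as given (in practice |Ru| requires judging the retrieved documents the user did not already know). All names and sets below are hypothetical.

```python
def coverage(Rk, U):
    """Fraction of the relevant documents known to the user that were retrieved."""
    return len(Rk) / len(U)

def novelty(Rk, Ru):
    """Fraction of the retrieved relevant documents previously unknown to the user."""
    return len(Ru) / (len(Ru) + len(Rk))

def relative_recall(Rk, Ru, U):
    """Relevant documents found relative to those the user expected to find."""
    return (len(Rk) + len(Ru)) / len(U)

# Hypothetical example: the user knows 8 relevant documents; the system
# retrieves 6 of them plus 3 relevant documents new to the user.
U  = {f"d{i}" for i in range(8)}
Rk = {"d0", "d1", "d2", "d3", "d4", "d5"}
Ru = {"d20", "d21", "d22"}
print(coverage(Rk, U), round(novelty(Rk, Ru), 3), relative_recall(Rk, Ru, U))
# 0.75 0.333 1.125
```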
24. A More Modern Relevance
Metric for Web Search
• Normalized Discounted Cumulated Gain (NDCG)
– K. Järvelin and J. Kekäläinen (TOIS 2002)
– Gain: the relevance of a document is no longer binary
– Sensitive to the position of highest rated
documents
• Log-discounting of gains according to the positions
– Normalize the DCG with the “ideal set” DCG.
25. NDCG Example
• Assume that the relevance scores 0 – 3 are used.
G’=<3, 2, 3, 0, 0, 1, 2, 2, 3, 0, …>
• Cumulated Gain (CG)
CG[i] = G[1]                  if i = 1
        CG[i−1] + G[i]        otherwise

CG’ = <3, 5, 8, 8, 8, 9, 11, 13, 16, 16, …>
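A minimal sketch of CG, its log-discounted version, and the normalized variant. Two assumptions are not fixed by the slides: the discount base is 2 (gains before that rank are not discounted, in the style of Järvelin and Kekäläinen), and the ideal vector is obtained here by sorting the same gain vector, whereas the original paper builds it from the full recall base.

```python
import math

def dcg(gains, base=2):
    """Discounted cumulated gain: the gain at rank i >= base is divided by
    log_base(i); earlier ranks are not discounted."""
    total, out = 0.0, []
    for i, g in enumerate(gains, start=1):
        total += g if i < base else g / math.log(i, base)
        out.append(total)
    return out

def ndcg(gains):
    """Normalize DCG by the DCG of the ideal ordering (gains sorted decreasingly)."""
    ideal = dcg(sorted(gains, reverse=True))
    actual = dcg(gains)
    return [a / i if i else 0.0 for a, i in zip(actual, ideal)]

G = [3, 2, 3, 0, 0, 1, 2, 2, 3, 0]   # the gain vector G' from the example
print([round(x, 2) for x in ndcg(G)])
```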
32. TREC: Overview
• TREC: Text REtrieval Conference
• Sponsors: NIST and DARPA; one of the subprojects of the TIPSTER text program
• Leader: Donna Harman (Manager of the Natural Language Processing and Information Retrieval Group of the Information Access and User Interfaces Division, NIST)
• Document collection
  – more than 5 GB
  – several million documents
33. History
• TREC-1 (Text Retrieval Conference) Nov 1992
• TREC-2 Aug 1993
• TREC-3
• TREC-7
  – January 16, 1998 -- submit application to NIST
  – Beginning February 2 -- document disks distributed to those new participants who have returned the required forms
  – June 1 -- 50 new test topics for the ad hoc task distributed
  – August 3 -- ad hoc results due at NIST
  – September 1 -- latest track submission deadline
  – September 4 -- speaker proposals due at NIST
  – October 1 -- relevance judgments and individual evaluation scores due back to participants
  – Nov. 9-11 -- TREC-7 conference at NIST in Gaithersburg, Md.
• TREC-8 (1999), TREC-9 (2000), TREC-10 (2001), …
34. The Test Collection
• the documents
• the example information requests (called
topics in TREC)
• the relevance judgments (the right answers)
35. The Documents
• Disk 1 (1GB)
– WSJ: Wall Street Journal (1987, 1988, 1989)
– AP: AP Newswire (1989)
– ZIFF: Articles from Computer Select disks (Ziff-Davis Publishing)
– FR: Federal Register (1989)
– DOE: Short abstracts from DOE publications
• Disk2 (1GB)
– WSJ: Wall Street Journal (1990, 1991, 1992)
– AP: AP Newswire (1988)
– ZIFF: Articles from Computer Select disks
– FR: Federal Register (1988)
36. The Documents (Continued)
• Disk 3 (1 GB)
– SJMN: San Jose Mercury News (1991)
– AP: AP Newswire (1990)
– ZIFF: Articles from Computer Select disks
– PAT: U.S. Patents (1993)
• Statistics
– document lengths
DOE (very short documents) vs. FR (very long documents)
– range of document lengths
AP (similar in length) vs. WSJ and ZIFF (wider range of lengths)
37. The TREC Document Collections

DOE (very short documents) vs. FR (very long documents)
AP (similar in length) vs. WSJ and ZIFF (wider range of lengths)

Volume   Revised      Sources                                          Size (MB)   # Docs     Median #    Mean #
                                                                                              Terms/Doc   Terms/Doc
1        March 1994   Wall Street Journal, 1987-1989                   267         98,732     245         434.0
                      Associated Press newswire, 1989                  254         84,678     446         473.9
                      Computer Selects articles, Ziff-Davis            242         75,180     200         473.0
                      Federal Register, 1989                           260         25,960     391         1315.9
                      Abstracts of U.S. DOE publications               184         226,087    111         120.4
2        March 1994   Wall Street Journal, 1990-1992 (WSJ)             242         74,520     301         508.4
                      Associated Press newswire, 1988 (AP)             237         79,919     438         468.7
                      Computer Selects articles, Ziff-Davis (ZIFF)     175         56,920     182         451.9
                      Federal Register, 1988 (FR88)                    209         19,860     396         1378.1
3        March 1994   San Jose Mercury News, 1991                      287         90,257     379         453.0
                      Associated Press newswire, 1990                  237         78,321     451         478.4
                      Computer Selects articles, Ziff-Davis            345         161,021    122         295.4
                      U.S. patents, 1993                               243         6,711      4445        5391.0
4        May 1996     The Financial Times, 1991-1994 (FT)              564         210,158    316         412.7
                      Federal Register, 1994 (FR94)                    395         55,630     588         644.7
                      Congressional Record, 1993 (CR)                  235         27,922     288         1373.5
5        April 1997   Foreign Broadcast Information Service (FBIS)     470         130,471    322         543.6
                      Los Angeles Times (1989, 1990)                   475         131,896    351         526.5
Routing Test Data     Foreign Broadcast Information Service (FBIS)     490         120,653    348         581.3
38. Document Format
(in Standard Generalized Mark-up Language, SGML)
<DOC>
<DOCNO>WSJ880406-0090</DOCNO>
<HL>AT&T Unveils Services to Upgrade Phone Networks Under Global Plan </HL>
<AUTHOR>Janet Guyon (WSJ staff) </AUTHOR>
<DATELINE>New York</DATELINE>
<TEXT>
American Telephone & Telegraph Co. introduced the first of a new generation of
phone services with broad implications for computer and communications
.
.
</TEXT>
</DOC>
39. Document Markup in TREC
<DOC>
<DOCNO>FT911-3</DOCNO>
<PROFILE>AN-BE0A7AAIFT</PROFILE>
<DATE>910514
</DATE>
<HEADLINE>
FT 14 MAY 91 / International Company News: Contigas plans DM900m east German project
</HEADLINE>
<BYLINE>
By DAVID GOODHART
</BYLINE>
<DATELINE>
BONN
</DATELINE>
<TEXT>
CONTIGAS, the German gas group 81 per cent owned by the utility Bayernwerk, said yesterday that it intends to
invest DM900m (Dollars 522m) in the next four years to build a new gas distribution system in the east German state of
Thuringia. …
</TEXT>
</DOC>
40. The Topics
• Issue 1
– allow a wide range of query construction methods
– keep the topic (user need) distinct from the query (the
actual text submitted to the system)
• Issue 2
– increase the amount of information available about
each topic
– include with each topic a clear statement of what
criteria make a document relevant
• TREC
– 50 topics/year, 400 topics (TREC1~TREC7)
41. Sample Topics used in TREC-1 and TREC-2
<top>
<head>Tipster Topic Description
<num>Number: 066
<dom>Domain: Science and Technology
<title>Topic: Natural Language Processing
<desc>Description: (one sentence description)
Document will identify a type of natural language processing technology which
is being developed or marketed in the U.S.
<narr>Narrative: (complete description of document relevance for assessors)
A relevant document will identify a company or institution developing or
marketing a natural language processing technology, identify the technology,
and identify one or more features of the company’s product.
<con>Concepts: (a mini-knowledge base about the topic, such as a real searcher might possess)
1. natural language processing
2. translation, language, dictionary, font
3. software applications
42. <fac> Factors: (allow easier automatic query building by listing specific items from the narrative that constrain the documents that are relevant)
<nat> Nationality: U.S.
</fact>
<def>Definition(s):
</top>
43. Sample Topic used in TREC-1 and TREC-2
<top>
<head> Tipster Topic Description
<num> Number: 037
<dom> Domain: Science and Technology
<title> Topic: Identify SAA components
<desc> Description:
Document identifies software products which adhere to IBM's SAA standards.
<narr> Narrative:
To be relevant, a document must identify a piece of software which is considered a Systems Application Architectural
(SAA) component or one which conforms to SAA.
<con> Concept(s):
1. SAA
2. OfficeVision
3. IBM
4. Standards, Interfaces, Compatibility
<fac> Factor(s):
<def> Definition(s):
OfficeVision - A series of integrated office automation applications from IBM that runs across all of its major computer
families.
Systems Application Architecture (SAA) - A set of IBM standards that provide consistent user interfaces, programming
interfaces, and communications protocols among all IBM computers from micro to mainframe.
</top>
44. Sample Topic used in TREC-3
<top>
<num> Number: 177
<title> Topic: English as the Official Language in U.S.
<desc> Description:
Document will provide arguments supporting the making of English the standard language of the U.S.
<narr> Narrative:
A relevant document will note instances in which English is favored as a standard language. Examples are the
positive results achieved by immigrants in the areas of acceptance, greater economic opportunity, and increased
academic achievement. Reports are also desired which describe some of the language difficulties encountered by
other nations and groups of nations, e.g., Canada, Belgium, European Community, when they have opted for the use of
two or more languages as their official means of communication. Not relevant are reports which promote
bilingualism or multilingualism.
</top>
45. Sample Topics used in TREC-3
<num>Number: 168
<title>Topic: Financing AMTRAK
<desc>Description:
A document will address the role of the Federal Government in financing
the operation of the National Railroad Transportation Corporation (AMTRAK)
<narr>Narrative:A relevant document must provide information on the
government’s responsibility to make AMTRAK an economically viable entity.
It could also discuss the privatization of AMTRAK as an alternative to
continuing government subsidies. Documents comparing government subsidies
given to air and bus transportation with those provided to AMTRAK would also
be relevant.
46. Features of topics in TREC-3
• The topics are shorter.
• The topics miss the complex structure of the earlier topics.
• The concept field has been removed.
• The topics were written by the same group of users that did assessments.
• Summary:
– TREC-1 and 2 (1-150): suited to the routing task
– TREC-3 (151-200): suited to the ad-hoc task
47. Sample Topic used in TREC-4
<top>
<num> Number: 217
<desc> Description:
Reporting on possibility of and search for extra-terrestrial life/intelligence.
</top>
TREC-4 topics retained only the description field; TREC-5 topics returned to a structure similar to TREC-3, but with a shorter average length.
50. The Relevance Judgments
• For each topic, compile a list of relevant documents.
• approaches
– full relevance judgments (impossible)
judge over 1M documents for each topic, resulting in 100M judgments
– random sample of documents (insufficient relevance sample)
relevance judgments done on the random sample only
– TREC approach (pooling method)
make relevance judgments on the sample of documents selected by
various participating systems
assumption: the vast majority of relevant documents have been found, and
documents that have not been judged can be assumed to be not relevant
• pooling method
– Take the top 100 documents retrieved by each system for a given topic.
– Merge them into a pool for relevance assessment.
– The sample is given to human assessors for relevance judgments.
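A minimal sketch of the pooling step described above: merge the top documents from each submitted run into a single pool for assessment. The run data and the small pool depth are hypothetical.

```python
def build_pool(runs, depth=100):
    """TREC-style pooling: take the top `depth` documents from each
    submitted run for a topic and merge them into a set for relevance assessment."""
    pool = set()
    for ranked_list in runs:
        pool.update(ranked_list[:depth])
    return pool

# Hypothetical runs from three systems for one topic.
runs = [["d1", "d2", "d3"], ["d2", "d4", "d5"], ["d1", "d5", "d6"]]
print(sorted(build_pool(runs, depth=2)))   # ['d1', 'd2', 'd4', 'd5']
```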
53. Overlap of Submitted Results
[Table omitted: percentages of unique documents, and of unique documents judged relevant, among the submitted results.]
TREC-1 (TREC-2): top 100 documents for each run (33 runs & 40 runs)
TREC-3: top 100 (200) documents for each run (48 runs)
After pooling, each topic was judged by a single assessor to ensure the best consistency of judgment.
Although TREC-1 and TREC-2 differ by 7 runs, the proportions of unique documents retrieved (39% vs. 28%) do not differ much, nor do the proportions of those documents judged relevant (22% vs. 19%).
In TREC-3 the pool submitted for judgment was twice as large, yet the unique proportions again differ little (21% vs. 20%), as do the proportions judged relevant (15% vs. 10%).
56. TREC Tasks and Tracks
[Table omitted: which of TREC-1 through TREC-7 included each task or track.]
• Main tasks: Routing, Ad hoc
• Tracks: Confusion, Spoken Document Retrieval, Database Merging, Filtering, High Precision, Interactive, Cross Language, Multilingual (Spanish, Chinese), Natural Language Processing, Query, Very Large Corpus
57. TREC-7
• Ad hoc task
– Participants will receive 5 gigabytes of data for use in training
their systems.
– The 350 topics used in the first six TREC workshops and the
relevance judgments for those topics will also be available.
– The 50 new test topics (351-400) will be distributed in June and
will be used to search the document collection consisting of the
documents on TREC disks 4 and 5.
– Results will be submitted to NIST as the ranked top 1000
documents retrieved for each topic.
58. TREC-7 (Continued)
• Track tasks
– Filtering Track
• A task in which the topics are stable (and some relevant
documents are known) but there is a stream of new documents.
• For each document, the system must make a binary decision as
to whether the document should be retrieved (as opposed to
forming a ranked list).
– Cross-Language Track
• An ad hoc task in which some documents are in English, some
in German, and others in French.
• The focus of the track will be to retrieve documents that
pertain to the topic regardless of language.
59. TREC-7 (Continued)
• High Precision User Track
– An ad hoc task in which participants are given five minutes per
topic to produce a retrieved set using any means desired (e.g.,
through user interaction, completely automatically).
• Interactive Track
– A task used to study user interaction with text retrieval systems.
• Query Track
– A track designed to foster research on the effects of query
variability and analysis on retrieval performance.
– Participants each construct several different versions of existing
TREC topics, some versions as natural language topics and some
as structured queries in a common format.
– All groups then run all versions of the topics.
60. TREC-7 (Continued)
• Spoken Document Retrieval Track
– An ad hoc task that investigates a retrieval system's ability to
retrieve spoken documents (recordings of speech).
• Very Large Corpus (VLC)
– An ad hoc task that investigates the ability of retrieval systems to
handle larger amounts of data. The current target corpus size is
approximately 100 gigabytes.
61. Categories of Query Construction
• AUTOMATIC
completely automatic initial query construction
• MANUAL
manual initial construction
• INTERACTIVE
use of interactive techniques to construct the queries
62. Levels of Participation
• Category A: full participation
• Category B:
full participation using a reduced database
• Category C: evaluation only
• submit up to two runs for the routing task, the ad hoc
task, or both
• send in the top 1000 documents retrieved for each
topic for evaluation
64. TREC-6
Apple Computer
AT&T Labs Research
Australian National Univ.
Carnegie Mellon Univ.
CEA (France)
Center for Inf. Res., Russia
Duke Univ./Univ. of Colorado/Bellcore
ETH (Switzerland)
FS Consulting, Inc.
GE Corp./Rutgers Univ.
George Mason Univ./NCR Corp
Harris Corp.
IBM T.J. Watson Res. (2 groups)
ISS (Singapore)
ITI (Singapore)
APL, Johns Hopkins Univ.
……………
65. Evaluation Measures at TREC
• Summary table statistics
– The number of topics used in the task
– The number of documents retrieved over all topics
– The number of relevant documents which were
effectively retrieved for all topics
• Recall-precision averages
• Document level averages
– Average precision at specified document cutoff values
(e.g., 5, 10, 20, 100 documents)
• Average precision histogram
71. NTCIR Tasks
• Ad-hoc Information Retrieval Task
• Cross-lingual Information Retrieval Task
  – Retrieve English documents using Japanese topics
  – 21 topics in total; the relevance judgments cover both English and Japanese documents
  – Systems may construct the queries automatically or manually
  – Systems must submit the top 1000 retrieved documents
• Automatic Term Extraction and Role Analysis Task
  – Automatic Term Extraction: extract technical terms from titles and abstracts
  – Role Analysis Task
72. NTCIR Workshop 2
• organizers
– Hsin-Hsi Chen (Chinese IR track)
– Noriko Kando (Japanese IR track)
– Sung-Hyon Myaeng (Korean IR track)
• Chinese test collection
– developer: Professor Kuang-hua Chen (LIS,
NTU)
– Document collection: 132,173 news stories
– Topics: 50
73. NTCIR 2 schedule
• Someday in April, 2000: Call for Participation
• May or later: Training set will be distributed
• August, 2000: Test Documents and Topics will be
distributed.
• Sept.10-30, 2000: Results submission
• Jan., 2001: Evaluation results will be distributed.
• Feb. 1, 2001: Paper submission for working notes
• Feb. 19-22, 2001 (or Feb. 26-March 1): Workshop
(in Tokyo)
• March, 2001: Proceedings
74. IREX: Overview
• IREX: Information Retrieval and Extraction Exercise
• Organizer: Information Processing Society of Japan
• Participants: about 20 teams (or more)
• Preliminary test: topics from the BMIR-J2 test collection
• Document collection
  – Mainichi Shimbun newspaper articles, 1994-1995
  – Participants must purchase the news corpus themselves
81. BMIR-J2 Topics
Q: F=oxoxo: “Utilizing solar energy”
Q: N-1: Retrieve texts mentioning use of solar energy
Q: N-2: Include texts concerning generating electricity and drying things with solar heat.
• Topic classification
  – Purpose: label the characteristics of each test topic, so that systems can select suitable topics
  – Labels: o (necessary), x (unnecessary)
  – Categories
    • The basic function
    • The numeric range function
    • The syntactic function
    • The semantic function
    • The world knowledge function
82. AMARYLLIS: Overview
• Organizer: INIST (Institut de l'Information Scientifique et Technique)
• Participants: about 10 teams
• Document collection
  – News articles ("The World"), more than 20,000 documents
  – Titles and abstracts extracted from the Pascal (1984-1995) and Francis (1992-1995) databases, more than 300,000 documents
86. An Evaluation of Query Processing Strategies
Using the Tipster Collection
(SIGIR 1993: 347-355)
James P. Callan and W. Bruce Croft
87. INQUERY Information Retrieval System
• Documents are indexed by the word stems and numbers
that occur in the text.
• Documents are also indexed automatically by a small
number of features that provide a controlled indexing
vocabulary.
• When a document refers to a company by name, the
document is indexed by the company name and the feature
#company.
• INQUERY includes company, country, U.S. city, number
and date, and person name recognizers.
88. INQUERY Information Retrieval System
• feature operators
#company operator matches the #company feature
• proximity operators
require their arguments to occur either in order, within
some distance of each other, or within some window
• belief operators
use the maximum, sum, or weighted sum of a set of beliefs
• synonym operators
• Boolean operators
89. Query Transformation in INQUERY
• Discard stop phrases.
• Recognize phrases by a stochastic part-of-speech tagger.
• Look for the word “not” in the query.
• Recognize proper names by assuming that a sequence of capitalized words is a proper name.
• Introduce synonyms by a small set of words that occur in
the Factors field of TIPSTER topics.
• Introduce controlled vocabulary terms (feature operators).
90. Techniques for Creating Ad Hoc Queries
• Simple Queries (description-only approach)
– Use the contents of Description field of TIPSTER topics only.
– Explore how the system behaves with the very short queries.
• Multiple Sources of Information (multiple-field approach)
– Use the contents of the Description, Title, Narrative, Concept(s)
and Factor(s) fields.
– Explore how a system might behave with an elaborate user
interface or very sophisticated query processing
• Interactive Query Creation
– Automatic query creation followed by simple manual
modifications.
– Simulate simple user interaction with the query processing.
91. Simple Queries
• A query is constructed automatically by employing all the
query processing transformations on Description field.
• The remaining words and operators are enclosed in a
weighted sum operator.
• 11-point average precision
93. Multiple Sources of Information
• Q-1 (+phrases, all fields; −synonym; −concept): Created automatically, using the T, D, N, C and F fields. Everything except the synonym and concept operators was discarded from the Narrative field. (baseline model)
• Q-3 (−phrases, −proper names, all fields): The same as Q-1, except that recognition of phrases and proper names was disabled. (words-only query) To determine whether phrase and proximity operators were helpful.
• Q-4 (+phrases, Narrative field; −phrases, other fields): The same as Q-1, except that recognition of phrases was applied to the Narrative field. To determine whether the simple query processing transformation would be effective on the abstract descriptions in the Narrative field.
94. Multiple Sources of Information (Continued)
• Q-6 (−Description, −Narrative): The same as Q-1, except that only the T, C, and F fields were used. Narrows in on the set of fields that appeared most useful.
• Q-F (+thesaurus, +phrases): The same as Q-1, with 5 additional thesaurus words or phrases added automatically to each query. An approach to automatically discovering thesaurus terms.
• Q-7: A combination of Q-1 and Q-6. Tests whether combining the results of two relatively similar queries could yield an improvement.
  At first glance Q-6 appears to be just a part of Q-1, so combining them would seem unnecessary, but on closer inspection they differ: when terms are selected according to some criterion, Q-1 may take only a small portion of the T, C, and F fields, whereas Q-6 does not.
95. A Comparison of Six Automatic Methods of Constructing AdHoc Queries
• Discarding the Description and Narrative fields did not hurt performance appreciably.
• Q-1 and Q-6, which are similar, retrieve different sets of documents.
• Phrases from the Narrative were not helpful.
• Phrases improved performance at low recall.
• It is possible to automatically construct a useful thesaurus for a collection.
96. Interactive Query Creation
• The system created a query using method Q-1, and then a
person was permitted to modify the resulting query.
• Modifications
– add words from the Narrative field
– delete words or phrases from the query
– indicate that certain words or phrases should occur near each other
within a document
• Q-M (+addition, +deletion): Manual addition of words or phrases from the Narrative, and manual deletion of words or phrases from the query.
• Q-O (+addition, +deletion, +proximity): The same as Q-M, except that the user could also indicate that certain words or phrases must occur within 50 words of each other.
98. The effects of thesaurus terms and phrases on queries
that were created automatically and modified manually
• Q-MF: Thesaurus expansion before modification.
• Q-OF: Thesaurus expansion after modification, with inclusion of unordered window operators (cf. Q-O: 42.7).
• Thesaurus words and phrases were added after the query was modified, so they were not used in unordered window operators.
99. Okapi at TREC3 and TREC4
SE Robertson, S Walker, S Jones, MM
Hancock-Beaulieu, M Gatford
Department of Information Science
City University
100. sim(dj, q) ≈ log [ P(dj|R) / P(dj|R̄) ]
               ≈ Σ_{i=1}^{t} gi(dj) gi(q) × log [ P(ki|R) × (1 − P(ki|R̄)) / ( P(ki|R̄) × (1 − P(ki|R)) ) ]

where V is the number of documents known to be relevant, Vi the number of those containing term ki, N the number of documents in the collection, and ni the number of documents containing ki (cf. the notes on the next slide), the probabilities are estimated as

P(ki|R) = (Vi + 0.5) / (V + 1)              1 − P(ki|R) = (V − Vi + 0.5) / (V + 1)
P(ki|R̄) = (ni − Vi + 0.5) / (N − V + 1)     1 − P(ki|R̄) = (N − V − ni + Vi + 0.5) / (N − V + 1)

Substituting these estimates, the weight of term ki becomes

log [ ((Vi + 0.5) / (V + 1)) × ((N − V − ni + Vi + 0.5) / (N − V + 1)) ] − log [ ((ni − Vi + 0.5) / (N − V + 1)) × ((V − Vi + 0.5) / (V + 1)) ]
  = log [ (Vi + 0.5) × (N − V − ni + Vi + 0.5) / ( (ni − Vi + 0.5) × (V − Vi + 0.5) ) ]
101. BM25 function in Okapi
Σ_{T∈Q} w(1) × [ (k1 + 1) tf / (K + tf) ] × [ (k3 + 1) qtf / (k3 + qtf) ]  +  k2 × |Q| × (avdl − dl) / (avdl + dl)

Q: a query, containing terms T
w(1): the Robertson-Sparck Jones weight,
      w(1) = log [ (r + 0.5) × (N − n − R + r + 0.5) / ( (n − r + 0.5) × (R − r + 0.5) ) ]
N: the number of documents in the collection (note: N)
n: the number of documents containing the term (note: ni)
R: the number of documents known to be relevant to a specific topic (note: V)
r: the number of relevant documents containing the term (note: Vi)
K: k1 × ((1 − b) + b × dl / avdl)
k1, b, k2 and k3: parameters which depend on the database and the nature of the topics;
      in the TREC-4 experiments k1, k3 and b were 1.0-2.0, 8 and 0.6-0.75 respectively, and k2 was zero throughout
tf: frequency of occurrence of the term (ki) within a specific document
qtf: frequency of the term within the topic from which Q was derived
dl: document length
avdl: average document length
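A minimal sketch of the BM25 function above, under the simplifying assumptions used when no relevance information is available (R = r = 0, so w(1) reduces to an idf-like weight) and with k2 = 0, as in the TREC-4 runs. All function names, statistics and the toy example are hypothetical.

```python
import math

def bm25_score(query_terms, doc_tf, doc_len, avg_doc_len, df, N,
               k1=1.2, b=0.75, k3=8.0):
    """Score one document for a query with the Okapi BM25 formula,
    assuming no known relevant documents (R = r = 0) and k2 = 0."""
    score = 0.0
    for term, qtf in query_terms.items():
        tf = doc_tf.get(term, 0)
        n = df.get(term, 0)
        if tf == 0 or n == 0:
            continue
        w1 = math.log((N - n + 0.5) / (n + 0.5))          # w(1) with R = r = 0
        K = k1 * ((1 - b) + b * doc_len / avg_doc_len)
        score += w1 * ((k1 + 1) * tf / (K + tf)) * ((k3 + 1) * qtf / (k3 + qtf))
    return score

# Hypothetical toy statistics: term frequencies in one document,
# document frequencies over a 1000-document collection.
query = {"solar": 1, "energy": 1}
doc   = {"solar": 3, "energy": 1, "panel": 2}
print(round(bm25_score(query, doc, doc_len=6, avg_doc_len=8,
                       df={"solar": 20, "energy": 50}, N=1000), 3))
```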