Relevance Feedback
After initial retrieval results are presented, allow the user to provide feedback on the relevance of one or more of the retrieved documents.
Use this feedback to reformulate the query.
Produce new results based on the reformulated query.
Allows a more interactive, multi-pass retrieval process.
Relevance Feedback Architecture
[Diagram: the query string is submitted to the IR system, which searches the document corpus and returns ranked documents (1. Doc1, 2. Doc2, 3. Doc3, ...); the user supplies relevance feedback on these results; query reformulation produces a revised query, which the IR system re-runs to yield revised rankings of re-ranked documents (e.g. 1. Doc2, 2. Doc4, 3. Doc5, ...).]
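The loop in this diagram can be written down compactly. The following is a minimal sketch, not the Java VSR implementation mentioned later; search, get_user_feedback, and reformulate are hypothetical stand-ins for the IR system, the user's relevance judgments, and the query reformulation step (for example Rocchio, below).

```python
def relevance_feedback_loop(query_vector, search, get_user_feedback, reformulate, passes=2):
    """Hypothetical sketch of the relevance feedback architecture:
    retrieve, collect user judgments, reformulate, retrieve again."""
    for _ in range(passes):
        ranked_docs = search(query_vector)                        # IR system: query -> ranked documents
        relevant, non_relevant = get_user_feedback(ranked_docs)   # user marks some results
        if not relevant and not non_relevant:
            break                                                 # no feedback given; stop iterating
        query_vector = reformulate(query_vector, relevant, non_relevant)  # e.g. Rocchio update
    return search(query_vector)                                   # revised rankings / re-ranked documents
```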
Query Reformulation
Query Reformulation for VSR
Optimal Query
\[
\vec{q}_{opt} \;=\; \frac{1}{|C_r|} \sum_{\vec{d}_j \in C_r} \vec{d}_j \;-\; \frac{1}{N - |C_r|} \sum_{\vec{d}_j \notin C_r} \vec{d}_j
\]
where C_r is the set of all relevant documents in the collection and N is the total number of documents.
Standard Rocchio Method
\[
\vec{q}_m \;=\; \alpha\,\vec{q} \;+\; \frac{\beta}{|D_r|} \sum_{\vec{d}_j \in D_r} \vec{d}_j \;-\; \frac{\gamma}{|D_n|} \sum_{\vec{d}_j \in D_n} \vec{d}_j
\]
where D_r and D_n are the retrieved documents marked relevant and non-relevant, and \alpha, \beta, \gamma are tunable weights. A common variant (Ide "dec hi") replaces the sum over D_n with only the highest-ranked non-relevant document, \max_{\mathrm{non\text{-}relevant}}(\vec{d}_j).
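A minimal NumPy sketch of the Rocchio update, assuming the query and documents are already term-weight vectors of the same dimension. The default weights (alpha=1.0, beta=0.75, gamma=0.15) and the clipping of negative weights are common conventions, not values given on the slide.

```python
import numpy as np

def rocchio_update(query, relevant_docs, non_relevant_docs,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Standard Rocchio reformulation:
    q_m = alpha*q + beta*mean(relevant) - gamma*mean(non-relevant)."""
    q_m = alpha * query
    if relevant_docs:
        q_m += beta * np.mean(relevant_docs, axis=0)
    if non_relevant_docs:
        q_m -= gamma * np.mean(non_relevant_docs, axis=0)
    return np.maximum(q_m, 0.0)   # negative term weights are usually clipped to zero

# Toy example over a 4-term vocabulary.
q = np.array([1.0, 0.0, 0.0, 0.0])
rel = [np.array([0.8, 0.6, 0.0, 0.0])]
non = [np.array([0.0, 0.0, 0.9, 0.4])]
print(rocchio_update(q, rel, non))
```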
Comparison of Methods
Relevance Feedback in Java VSR
Evaluating Relevance Feedback
By construction, the reformulated query will rank explicitly marked relevant documents higher and explicitly marked irrelevant documents lower.
The method should not get credit for improvement on these documents, since it was told their relevance.
In machine learning, this error is called testing on the training data.
Evaluation should instead focus on how well the method generalizes to other, unrated documents.
Fair Evaluation of Relevance Feedback
Remove from the corpus any documents for which feedback was provided.
Measure recall/precision performance on the remaining residual collection.
Compared to the complete corpus, the specific recall/precision numbers may decrease, since relevant documents were removed.
However, relative performance on the residual collection provides fair data on the effectiveness of relevance feedback.
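A sketch of residual-collection evaluation as described above. The document ids, the judged sets, and the fixed cutoff k are illustrative; the point is that documents with explicit feedback are excluded before precision and recall are computed.

```python
def residual_eval(ranked_doc_ids, feedback_doc_ids, relevant_doc_ids, k=10):
    """Evaluate on the residual collection: drop every document the user
    gave feedback on, then measure precision/recall at rank k on the rest."""
    residual_ranking = [d for d in ranked_doc_ids if d not in feedback_doc_ids]
    residual_relevant = relevant_doc_ids - feedback_doc_ids
    retrieved = residual_ranking[:k]
    hits = sum(1 for d in retrieved if d in residual_relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(residual_relevant) if residual_relevant else 0.0
    return precision, recall

# Toy usage: feedback was given on d1 and d7, so they are excluded from scoring.
print(residual_eval(["d3", "d1", "d5", "d9", "d7"],
                    feedback_doc_ids={"d1", "d7"},
                    relevant_doc_ids={"d1", "d3", "d9"},
                    k=3))
```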
Why is Feedback Not Widely Used
Pseudo Feedback
Pseudo Feedback Architecture
[Diagram: same loop as the relevance feedback architecture, except that the feedback step is automatic (pseudo feedback): the top-ranked documents from the initial retrieval are assumed relevant and passed to query reformulation, which produces the revised query and the re-ranked documents.]
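Pseudo feedback automates the loop above: the top-ranked documents from the initial retrieval are simply assumed to be relevant. A sketch under those assumptions, reusing the hypothetical rocchio_update from the earlier sketch; search, doc_vectors, and the cutoff k are illustrative stand-ins.

```python
def pseudo_feedback(query, search, doc_vectors, k=5):
    """Blind (pseudo) relevance feedback: assume the top-k initially
    retrieved documents are relevant and reformulate the query with them."""
    initial_ranking = search(query)                      # list of doc ids, best first
    assumed_relevant = [doc_vectors[d] for d in initial_ranking[:k]]
    revised_query = rocchio_update(query, assumed_relevant, [])  # no non-relevant set
    return search(revised_query)                         # re-ranked documents
```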
PseudoFeedback Results
Thesaurus
Thesaurus-based Query Expansion
For each term t in a query, expand the query with synonyms and related words of t from the thesaurus.
Added terms may be weighted less than the original query terms.
Generally increases recall.
May significantly decrease precision, particularly with ambiguous terms:
    "interest rate"  →  "interest rate fascinate evaluate"
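A sketch of thesaurus-based expansion with down-weighted added terms. The tiny THESAURUS table and the 0.5 weight are made up for illustration; note how the unrelated sense "fascinate" gets added for "interest", which is exactly the precision problem mentioned above.

```python
THESAURUS = {                      # toy stand-in for a real thesaurus
    "interest": ["fascinate", "pastime"],
    "rate": ["evaluate", "pace"],
}

def expand_query(terms, weight=0.5):
    """Return {term: weight}; original terms get weight 1.0,
    thesaurus synonyms get a smaller weight."""
    weighted = {t: 1.0 for t in terms}
    for t in terms:
        for syn in THESAURUS.get(t, []):
            weighted.setdefault(syn, weight)
    return weighted

print(expand_query(["interest", "rate"]))
# {'interest': 1.0, 'rate': 1.0, 'fascinate': 0.5, 'pastime': 0.5, 'evaluate': 0.5, 'pace': 0.5}
```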
WordNet
A more detailed database of semantic relationships between English words.
Developed by the famous cognitive psychologist George Miller and a team at Princeton University.
About 144,000 English words.
Nouns, adjectives, verbs, and adverbs grouped into about 109,000 synonym sets called synsets.
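WordNet can be queried directly from code. The sketch below uses NLTK's WordNet interface, which is an assumption about tooling rather than something the slides prescribe, to list the synsets, synonyms, and hypernyms of a term.

```python
# Requires: pip install nltk, then a one-time nltk.download("wordnet").
from nltk.corpus import wordnet as wn

def wordnet_neighbors(word):
    """Print each synset of `word` with its synonym lemmas and hypernyms."""
    for synset in wn.synsets(word):
        synonyms = [lemma.name() for lemma in synset.lemmas()]
        hypernyms = [h.name() for h in synset.hypernyms()]
        print(synset.name(), "| synonyms:", synonyms, "| hypernyms:", hypernyms)

wordnet_neighbors("interest")   # prints one line per sense of "interest"
```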
WordNet Synset Relationships
Antonym: front → back
Attribute: benevolence → good (noun to adjective)
Pertainym: alphabetical → alphabet (adjective to noun)
Similar: unquestioning → absolute
Cause: kill → die
Entailment: breathe → inhale
Holonym: chapter → text (part to whole)
Meronym: computer → cpu (whole to part)
Hyponym: plant → tree (specialization)
Hypernym: apple → fruit (generalization)
WordNet Query Expansion
Statistical Thesaurus
Automatic Global Analysis
Association Matrix
\[
\begin{array}{c|cccc}
        & w_1    & w_2    & \cdots & w_n    \\ \hline
 w_1    & c_{11} & c_{12} & \cdots & c_{1n} \\
 w_2    & c_{21} & c_{22} & \cdots & c_{2n} \\
 \vdots & \vdots &        & \ddots & \vdots \\
 w_n    & c_{n1} & c_{n2} & \cdots & c_{nn}
\end{array}
\]
where c_{ij} is the association (correlation) between terms w_i and w_j.
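The slide does not spell out how each c_ij is computed; a common choice, assumed in this sketch, is the co-occurrence value c_ij = sum_k f_ik * f_jk over a term-document frequency matrix F, which is simply F times its transpose.

```python
import numpy as np

# Toy term-document frequency matrix F: rows = terms w1..w4, columns = documents.
terms = ["apple", "computer", "fruit", "laptop"]
F = np.array([[2, 0, 1],
              [1, 3, 0],
              [1, 0, 2],
              [0, 2, 0]], dtype=float)

C = F @ F.T          # association matrix: C[i, j] = sum_k F[i, k] * F[j, k]
print(C)             # C[0, 1] is the association between "apple" and "computer"
```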
Normalized Association Matrix
Metric Correlation Matrix
Query Expansion with Correlation Matrix
Problems with Global Analysis
Automatic Local Analysis
At query time, dynamically determine similar terms based on an analysis of the top-ranked retrieved documents.
Base the correlation analysis on only the local set of documents retrieved for a specific query.
Avoids ambiguity by determining similar (correlated) terms only within relevant documents, e.g.:
    "Apple computer"  →  "Apple computer Powerbook laptop"
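Local analysis applies the same association computation, but only to the documents retrieved for the current query. A sketch reusing the toy term-document matrix from the association-matrix example; the retrieved document indices are hypothetical.

```python
import numpy as np

def local_association(F, retrieved_doc_indices):
    """Build the association matrix using only the columns (documents)
    that were retrieved for the current query."""
    local_F = F[:, retrieved_doc_indices]
    return local_F @ local_F.T

# Reusing the toy matrix F from above: analyse only documents 0 and 2.
F = np.array([[2, 0, 1],
              [1, 3, 0],
              [1, 0, 2],
              [0, 2, 0]], dtype=float)
print(local_association(F, [0, 2]))
```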
Global vs. Local Analysis
Global Analysis Refinements
Only expand the query with terms that are similar to all terms in the query:
\[
\mathrm{sim}(k_i, Q) \;=\; \sum_{k_j \in Q} c_{ij}
\]
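A sketch of this refinement: each candidate term k_i is scored against the whole query by summing its correlations with every query term, and only the top-scoring terms outside the query are added. The toy correlation matrix (the F·Fᵀ values from the earlier example) and the cutoff are illustrative.

```python
import numpy as np

def expansion_terms(C, query_term_indices, vocab, top_n=2):
    """Score each candidate term i by sim(k_i, Q) = sum over query terms j of c_ij,
    and return the top-scoring terms not already in the query."""
    sims = C[:, query_term_indices].sum(axis=1)
    candidates = [i for i in np.argsort(sims)[::-1] if i not in query_term_indices]
    return [vocab[i] for i in candidates[:top_n]]

# Toy correlation matrix over the vocabulary from the association-matrix example.
vocab = ["apple", "computer", "fruit", "laptop"]
C = np.array([[5., 2., 4., 0.],
              [2., 10., 1., 6.],
              [4., 1., 5., 0.],
              [0., 6., 0., 4.]])
print(expansion_terms(C, query_term_indices=[0, 1], vocab=vocab))  # -> ['laptop', 'fruit']
```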
Query Expansion Conclusions