Extended Gloss Overlaps as a Measure of Semantic Relatedness
Satanjeev Banerjee
Carnegie Mellon University, Pittsburgh, PA 15213
satanjeev.banerjee@cs.cmu.edu

Ted Pedersen
University of Minnesota, Duluth, MN 55812
tpederse@umn.edu
Abstract

This paper presents a new measure of semantic relatedness between concepts that is based on the number of shared words (overlaps) in their definitions (glosses). This measure is unique in that it extends the glosses of the concepts under consideration to include the glosses of other concepts to which they are related according to a given concept hierarchy. We show that this new measure reasonably correlates to human judgments. We introduce a new method of word sense disambiguation based on extended gloss overlaps, and demonstrate that it fares well on the SENSEVAL-2 lexical sample data.

1 Introduction

Human beings have an innate ability to determine if two concepts are related. For example, most would agree that the automotive senses of car and tire are related while car and tree are not. However, assigning a value that quantifies the degree to which two concepts are related proves to be more difficult [Miller and Charles, 1991]. In part, this is because relatedness is a very broad notion. For example, two concepts can be related because one is a more general instance of the other (e.g., a car is a kind of vehicle) or because one is a part of another (e.g., a tire is a part of a car).

This paper introduces extended gloss overlaps, a measure of semantic relatedness that is based on information from a machine readable dictionary. In particular, this measure takes advantage of hierarchies or taxonomies of concepts as found in resources such as the lexical database WordNet [Fellbaum, 1998].

Concepts are commonly represented in dictionaries by word senses, each of which has a definition or gloss that briefly describes its meaning. Our measure determines how related two concepts are by counting the number of shared words (overlaps) in the glosses of the word senses of the concepts, as well as in the glosses of words that are related to those concepts according to the dictionary. These related concepts are explicitly encoded in WordNet as relations, but can be found in any dictionary via synonyms, antonyms, or also-see references provided for a word sense. To our knowledge, this work represents the first attempt to define a quantitative measure of relatedness between two concepts based on their dictionary definitions.

This paper begins with a brief description of WordNet, which was used in developing our measure. Then we introduce the extended gloss overlap measure, and present two distinct evaluations. First, we conduct a comparison to previous human studies of relatedness and find that our measure has a correlation of at least 0.6 with human judgments. Second, we introduce a word sense disambiguation algorithm that assigns the most appropriate sense to a target word in a given context based on the degree of relatedness between the target and its neighbors. We find that this technique is more accurate than all but one system that participated in the SENSEVAL-2 comparative word sense disambiguation exercise. Finally we present an extended analysis of our results and close with a brief discussion of related work.

2 WordNet

WordNet is a lexical database where each unique meaning of a word is represented by a synonym set or synset. Each synset has a gloss that defines the concept that it represents. For example the words car, auto, automobile, and motorcar constitute a single synset that has the following gloss: four wheel motor vehicle, usually propelled by an internal combustion engine. Many glosses have examples of usages associated with them, such as "he needs a car to get to work".

Synsets are connected to each other through explicit semantic relations that are defined in WordNet. These relations only connect word senses that are used in the same part of speech. Noun synsets are connected to each other through hypernym, hyponym, meronym, and holonym relations. If a noun synset A is connected to another noun synset B through the is-a-kind-of relation, then A is said to be a hypernym of synset B and B a hyponym of A. For example the synset containing car is a hypernym of the synset containing hatchback, and hatchback is a hyponym of car. If a noun synset A is connected to another noun synset B through the is-a-part-of relation, then A is said to be a meronym of B and B a holonym of A. For example the synset containing accelerator is a meronym of car and car is a holonym of accelerator. A noun synset A is related to an adjective synset B through the attribute relation when B is a value of A. For example the adjective synset standard is a value of the noun synset measure.
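The glosses and relations described above can be inspected directly. The sketch below is purely illustrative and is not the tooling used in this paper (the authors accessed WordNet through Perl modules); it uses NLTK's WordNet interface, which exposes the same information, to show the kind of material the extended gloss overlap measure draws on.

```python
# Illustrative only: assumes NLTK and its WordNet data (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

car = wn.synsets("car", pos=wn.NOUN)[0]   # the automotive sense of "car"
print(car.definition())                   # the gloss of the synset
print(car.examples())                     # usage examples, e.g. "he needs a car to get to work"

# Explicit semantic relations between noun synsets
print(car.hypernyms())                    # is-a-kind-of parents (e.g. a motor vehicle synset)
print(car.hyponyms())                     # children (e.g. hatchback, cab, ...)
print(car.part_meronyms())                # has-part (e.g. accelerator)

tire = wn.synsets("tire", pos=wn.NOUN)[0]
print(tire.hypernyms())                   # likely the hoop synset

# Verbs and adjectives use different relations
print(wn.synsets("operate", pos=wn.VERB)[0].hyponyms())    # troponyms of the verb
print(wn.synsets("accessible", pos=wn.ADJ)[0].also_sees()) # also-see cross references
```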
Taxonomic or is-a relations also exist for verb synsets. Verb synset A is a hypernym of verb synset B if to B is one way to A. Synset B is called a troponym of A. For example the verb synset containing the word operate is a hypernym of drive, since to drive is one way to operate. Conversely drive is a troponym of operate. The troponym relation for verbs is analogous to the hyponym relation for nouns, and henceforth we shall use the term hyponym instead of the term troponym.

Adjective synsets are related to each other through the similar-to relation. For example the synset containing the adjective last is said to be similar to the synset containing the adjective dying. Verb and adjective synsets are also related to each other through cross-reference also-see links. For example, the adjectives accessible and convenient are related through also-see links.

While there are other relations in WordNet, those described above make up more than 93% of the total number of links in WordNet. These are the relations we have employed in the extended gloss overlap measure.

3 The Extended Gloss Overlap Measure

Gloss overlaps were introduced by [Lesk, 1986] to perform word sense disambiguation. The Lesk Algorithm assigns a sense to a target word in a given context by comparing the glosses of its various senses with those of the other words in the context. That sense of the target word whose gloss has the most words in common with the glosses of the neighboring words is chosen as its most appropriate sense.

For example, consider the glosses of car and tire: four wheel motor vehicle, usually propelled by an internal combustion engine and hoop that covers a wheel, usually made of rubber and filled with compressed air. The relationship between these concepts is shown in that their glosses share the content word wheel. However, they share no content words with the gloss of tree: a tall perennial woody plant having a main trunk and branches forming a distinct elevated crown.

The original Lesk Algorithm only considers overlaps among the glosses of the target word and those that surround it in the given context. This is a significant limitation in that dictionary glosses tend to be fairly short and do not provide sufficient vocabulary to make fine grained distinctions in relatedness. As an example, the average length of a gloss in WordNet is just seven words. The extended gloss overlap measure expands the glosses of the words being compared to include glosses of concepts that are known to be related to the concepts being compared.

Our measure takes as input two concepts (represented by two WordNet synsets) and outputs a numeric value that quantifies their degree of semantic relatedness. In the sections that follow, we describe the foundations of the measure and how it is computed.

3.1 Using Glosses of Related Senses

There are two fundamental premises to the original Lesk Algorithm. First, words that appear together in a sentence will be used in related senses. Second, and most relevant to our measure, the degree to which senses are related can be identified by the number of overlaps in their glosses. In other words, the more related two senses are, the more words their glosses will share.

WordNet provides explicit semantic relations between synsets, such as through the is-a or has-part links. However, such links do not cover all possible relations between synsets. For example, WordNet encodes no direct link between the synsets car and tire, although they are clearly related. We observe however that the glosses of these two synsets have words in common. Similar to Lesk's premise, we assert that such overlaps provide evidence that there is an implicit relation between those synsets. Given such a relation, we further conclude that synsets explicitly related to car are thereby also related to synsets explicitly related to tire. For example, we conclude that the synset vehicle (which is the hypernym synset of car) is related to the synset hoop (which is the hypernym synset of tire). Thus, our measure combines the advantages of gloss overlaps with the structure of a concept hierarchy to create an extended view of relatedness between synsets.

We base our measure on the idea of an extended set of comparisons. When measuring the relatedness between two input synsets, we not only look for overlaps between the glosses of those synsets, but also between the glosses of the hypernym, hyponym, meronym, holonym and troponym synsets of the input synsets, as well as between synsets related to the input synsets through the relations of attribute, similar-to and also-see. Not all of these relations are equally helpful, and the optimum choice of relations to use for comparisons possibly depends on the application in which the overlap measure is being employed. Section 6 compares the relative efficacy of these relations when our measure of relatedness is applied to the task of word sense disambiguation.

3.2 Scoring Mechanism

We introduce a novel way of finding and scoring the overlaps between two glosses. The original Lesk Algorithm compares the glosses of a pair of concepts and computes a score by counting the number of words that are shared between them. This scoring mechanism does not differentiate between single word and phrasal overlaps and effectively treats each gloss as a "bag of words". For example, it assigns a score of 3 to the concepts drawing paper and decal, which have the glosses paper that is specially prepared for use in drafting and the art of transferring designs from specially prepared paper to a wood or glass or metal surface. There are three words that overlap: paper and the two-word phrase specially prepared.

There is a Zipfian relationship [Zipf, 1935] between the lengths of phrases and their frequencies in a large corpus of text. The longer the phrase, the less likely it is to occur multiple times in a given corpus. A phrasal n-word overlap is a much rarer occurrence than a single word overlap. Therefore, we assign an n-word overlap the score of n². This gives an n-word overlap a score that is greater than the sum of the scores assigned to those n words if they had occurred in two or more phrases, each less than n words long.

For the above gloss pair, we assign the overlap paper a score of 1 and specially prepared a score of 4, leading to a total score of 5. Note that if the overlap was the 3-word phrase specially prepared paper, then the score would have been 9.
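A minimal sketch of this scoring scheme is given below. It is not the authors' implementation; it simply follows the procedure just described (and formalized immediately below): repeatedly find the longest shared phrase that neither starts nor ends with a function word, replace it with unique markers, and add the square of its length to the score. The function-word list is a placeholder, since the paper does not give one.

```python
import re

# Placeholder stop list: the paper's function words are pronouns,
# prepositions, articles and conjunctions; an exact list is not given.
FUNCTION_WORDS = {"a", "an", "the", "of", "to", "in", "on", "for", "from",
                  "with", "by", "and", "or", "but", "that", "it", "he", "she"}

def _tokens(text):
    """Lowercased word tokens of a gloss."""
    return re.findall(r"[a-z0-9]+", text.lower())

def _longest_overlap(a, b):
    """Longest run of consecutive tokens shared by a and b that neither
    starts nor ends with a function word; returns (i, j, n) or None.
    Ties go to the overlap that occurs earliest in a."""
    best = None
    for i in range(len(a)):
        if a[i] in FUNCTION_WORDS:
            continue                      # an overlap may not start with a function word
        for j in range(len(b)):
            if b[j] != a[i]:
                continue
            n = 0
            while i + n < len(a) and j + n < len(b) and a[i + n] == b[j + n]:
                n += 1
            while n > 1 and a[i + n - 1] in FUNCTION_WORDS:
                n -= 1                    # ...nor end with one
            if best is None or n > best[2]:
                best = (i, j, n)
    return best

def score(gloss1, gloss2):
    """Overlap score of two glosses: each shared phrase of n words adds n**2.
    Matched phrases are replaced by unique markers so they count only once."""
    a, b = _tokens(gloss1), _tokens(gloss2)
    total = marker = 0
    while a and b:
        found = _longest_overlap(a, b)
        if found is None:
            break
        i, j, n = found
        total += n * n
        a[i:i + n] = ["<a%d>" % marker]   # markers can never match each other
        b[j:j + n] = ["<b%d>" % marker]
        marker += 1
    return total

# The drawing paper / decal example from the text scores 4 + 1 = 5:
print(score("paper that is specially prepared for use in drafting",
            "the art of transferring designs from specially prepared paper "
            "to a wood or glass or metal surface"))
```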
Thus, our overlap detection and scoring mechanism can be formally defined as follows: When comparing two glosses, we define an overlap between them to be the longest sequence of one or more consecutive words that occurs in both glosses, such that neither the first nor the last word is a function word, that is, a pronoun, preposition, article or conjunction. If two or more such overlaps have the same longest length, then the overlap that occurs earliest in the first string being compared is reported. Given two strings, the longest overlap between them is detected, removed, and in its place a unique marker is placed in each of the two input strings. The two strings thus obtained are then again checked for overlaps, and this process continues until there are no longer any overlaps between them. The sizes of the overlaps thus found are squared and added together to arrive at the score for the given pair of glosses.

3.3 Computing Relatedness

The extended gloss overlap measure computes the relatedness between two input synsets A and B by comparing the glosses of synsets that are related to A and B through explicit relations provided in WordNet.

We define RELS as a (non-empty) set of relations that consists of one or more of the relations described in Section 2. That is, RELS ⊆ {r | r is a relation defined in WordNet}. Suppose each relation in RELS has a function of the same name that accepts a synset as input and returns the gloss of the synset (or synsets) related to the input synset by the designated relation.

For example, assume r represents the hypernym relation. Then r(A) returns the gloss of the hypernym synset of A. r can also represent the gloss "relation" such that r(A) returns the gloss of synset A, and the example "relation" such that r(A) returns the example string associated with synset A. If more than one synset is related to the input synset through the same relation, their glosses are concatenated and returned. We perform this concatenation because we do not wish to differentiate between the different synsets that are all related to the input synset through a particular relation, but instead are only interested in all their definitional glosses. If no synset is related to the input synset by the given relation then the null string is returned.

Next, form a non-empty set of pairs of relations from the set of relations above. The only constraint in forming such pairs is that if the pair (r1, r2) is chosen (where r1, r2 ∈ RELS), then the pair (r2, r1) must also be chosen, so that the relatedness measure is symmetric, that is, relatedness(A, B) = relatedness(B, A). Thus, we define the set RELPAIRS as follows:

RELPAIRS = {(r1, r2) | r1, r2 ∈ RELS; if (r1, r2) ∈ RELPAIRS, then (r2, r1) ∈ RELPAIRS}

Finally, assume that score() is a function that accepts as input two glosses, finds the phrases that overlap between them and returns a score as described in the previous section. Given all of the above, the relatedness score between the input synsets A and B is computed as follows:

relatedness(A, B) = Σ_{(r1, r2) ∈ RELPAIRS} score(r1(A), r2(B))

Our relatedness measure is based on the set of all possible pairs of relations from the list of relations described in Section 3.1. For purposes of illustration, assume that our set of relations RELS = {gloss, hype, hypo} (where hype and hypo are contractions of hypernym and hyponym respectively). Further assume that our set of relation pairs RELPAIRS = {(gloss, gloss), (hype, hype), (hypo, hypo), (hype, gloss), (gloss, hype)}. Then the relatedness between synsets A and B is computed as follows:

relatedness(A, B) = score(gloss(A), gloss(B)) + score(hype(A), hype(B)) + score(hypo(A), hypo(B)) + score(hype(A), gloss(B)) + score(gloss(A), hype(B))

Observe that due to our pair selection constraint as described above, relatedness(A, B) is indeed the same as relatedness(B, A).

4 Comparison to Human Judgments

Our comparison to human judgments is based on three previous studies. [Rubenstein and Goodenough, 1965] presented human subjects with 65 noun pairs and asked them how similar they were on a scale from 0.0 to 4.0. [Miller and Charles, 1991] took a 30 pair subset of this data and repeated this experiment, and found results that were highly correlated (.97) to the previous study. The results from the 30 pair set common to both studies were used again by [Budanitsky and Hirst, 2001] in an evaluation of five automatic measures of semantic relatedness that will be mentioned in Section 7. They report that all of the measures fared relatively well, with the lowest correlation being .74 and the highest .85. When comparing our measure to these 30 words, we find that it has a correlation of .67 to the Miller and Charles human study, and one of .60 to the Rubenstein and Goodenough experiment.

We do not find it discouraging that the correlation of extended gloss overlaps is lower than those reported by Budanitsky and Hirst for other measures. In fact, given the complexity of the task, it is noteworthy that it demonstrates some correlation with human judgment. The fact that the test set contains only 30 word pairs is a drawback of human evaluation, where rigorous studies are by necessity limited to a small number of words. Automatic measures can be evaluated relative to very large numbers of words, and we believe such an evaluation is an important next step in order to establish where differences lie among such measures. As a final point of concern, concepts can be related in many ways, and it is possible that a human and an automatic measure could rely on different yet equally well motivated criteria to arrive at diverging judgments.

5 Application to WSD

We have developed an approach to word sense disambiguation based on the use of the extended gloss overlap measure. In our approach, a window of context around the target word is selected, and a set of candidate senses is identified for each content word in the window. Assume that the window of context consists of 2n + 1 words denoted by w_i, −n ≤ i ≤ n, where the target word is w_0. Further let |w_i| denote the number of candidate senses of word w_i, and let these senses be denoted by s_{i,j}, 1 ≤ j ≤ |w_i|.
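These candidate senses are compared with one another using the relatedness measure just defined in Section 3.3. The sketch below is not the authors' implementation; it simply ties that definition to code, reusing the score() function sketched in Section 3.2 and assuming hypothetical gloss-returning relation functions (gloss, hype, hypo) built on NLTK's WordNet interface. A richer RELPAIRS would also include meronym, holonym, attribute, similar-to and also-see pairs.

```python
from nltk.corpus import wordnet as wn

# Hypothetical relation functions: each returns the (concatenated) glosses of
# the synsets reached from the input synset through one WordNet relation.
def gloss(synset):
    return synset.definition()

def hype(synset):
    return " ".join(s.definition() for s in synset.hypernyms())

def hypo(synset):
    return " ".join(s.definition() for s in synset.hyponyms())

# The symmetric relation-pair set used in the illustration of Section 3.3.
RELPAIRS = [(gloss, gloss), (hype, hype), (hypo, hypo),
            (hype, gloss), (gloss, hype)]

def relatedness(a, b):
    """Extended gloss overlap relatedness: sum of overlap scores (the score()
    function sketched in Section 3.2) over every relation pair in RELPAIRS."""
    return sum(score(r1(a), r2(b)) for r1, r2 in RELPAIRS)

car = wn.synsets("car", pos=wn.NOUN)[0]
tire = wn.synsets("tire", pos=wn.NOUN)[0]
tree = wn.synsets("tree", pos=wn.NOUN)[0]
print(relatedness(car, tire), relatedness(car, tree))  # car-tire should score higher
```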
Next we assign to each possible sense k of the target word a score SenseScore_k, computed by adding together the relatedness scores obtained by comparing the sense of the target word in question with every sense of every non-target word in the window of context. The SenseScore for sense s_{0,k} is computed as follows:

SenseScore_k = Σ_{i = −n..n, i ≠ 0} Σ_{j = 1..|w_i|} relatedness(s_{0,k}, s_{i,j})
That sense with the highest SenseScore is judged to be the most appropriate sense for the target word. If there are on average a senses per word and the window of context is N words long, there are on the order of a² × (N − 1) pairs of sets of synsets to be compared (every candidate sense of the target against every candidate sense of each of the N − 1 other words in the window), which increases linearly with N.
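The disambiguation step just described can be sketched as follows. This is a simplified illustration, not the system evaluated below: it assumes the relatedness() function from the previous sketch and NLTK's WordNet interface, looks up candidate senses for the surface forms only, keeps the first sense on ties, and returns no sense when every candidate scores 0.

```python
from nltk.corpus import wordnet as wn

def disambiguate(window_words, target_index):
    """Pick the sense of the target word with the highest SenseScore: the sum
    of relatedness() values against every candidate sense of every other word
    in the window. Ties keep the first sense found; a zero total returns None."""
    target_senses = wn.synsets(window_words[target_index])
    neighbor_senses = [wn.synsets(w) for k, w in enumerate(window_words)
                       if k != target_index]
    best_sense, best_score = None, 0
    for sense in target_senses:
        sense_score = sum(relatedness(sense, other)
                          for senses in neighbor_senses for other in senses)
        if sense_score > best_score:
            best_sense, best_score = sense, sense_score
    return best_sense

# A window of three content words with the target in the middle:
print(disambiguate(["bank", "interest", "rate"], 1))
```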
5.1 Experimental Data
Our evaluation data is taken from the English lexical sample task of SENSEVAL-2 [Edmonds and Cotton, 2001]. This was a comparative evaluation of word sense disambiguation systems that resulted in a large set of results and data that are now freely available to the research community.

This data consists of 4,328 instances, each of which contains a sentence with a single target word to be disambiguated, and one or two surrounding sentences that provide additional context. A human judge has labeled each target word with the most appropriate WordNet sense for that context. A word sense disambiguation system is given these same instances (minus the human assigned senses) and must output what it believes to be the most appropriate senses for each of the target words. There are 73 distinct target words: 29 nouns, 29 verbs, and 15 adjectives, and the part of speech of the target words is known to the systems.

5.2 Experimental Results

For every instance, function words are removed and then a window of words is defined such that the target word is at the center (if possible). Next, for every word in the window, candidate senses are picked by including the synsets in WordNet that the word belongs to, as well as those that an uninflected form of the word belongs to (if any). Given these candidate senses, the algorithm described above finds the most appropriate sense of the target word.

It is possible that there is a tie among multiple senses for the highest score for a word. In this case, all those senses are reported as answers and partial credit is given if one of them proves to be correct. This would be appropriate if a word were truly ambiguous in a context, or if the meanings were very closely related and it was not possible to distinguish between them. It is also possible that no sense gets more than a score of 0; in this case, no answer is reported since there is no evidence to choose one sense over another.

Given the answers generated by the algorithm, we compare them with the human decided answers and compute precision (the number of correct answers divided by the number of answers reported) and recall (the number of correct answers divided by the number of instances). These two values can be summarized by the F-measure, which is the harmonic mean of the precision and recall:

F = (2 × precision × recall) / (precision + recall)

Table 1: WSD Evaluation Results

Table 1 lists the precision, recall and F-measure for all the SENSEVAL-2 words when disambiguated using a window size of 3. The overall results for our approach are shown as Overall, and these are also broken down based on the part of speech (POS) of the target word. This table also displays results from other baseline or representative systems. The Original Lesk results are based on utilizing the glosses of only the input synsets and nothing else. While this does not exactly replicate the original Lesk Algorithm, it is quite similar. The random results reflect the accuracies obtained by simply selecting randomly from the candidate senses.

The Sval-First, Sval-Second, and Sval-Third results are from the top three most accurate fully automatic unsupervised systems in the SENSEVAL-2 exercise. This is the class of systems most directly comparable to our own, since they require no human intervention and do not use any manually created training examples. These results show that our approach was considerably more accurate than all but one of the participating systems.

These results are significant because they are based on a very simple algorithm that relies on assigning relatedness scores to the senses of a target word and the senses of its immediately adjacent neighbors. While the disambiguation results could be improved via the combination of various techniques, our focus is on developing the extended gloss overlap measure of relatedness as a general tool for Natural Language Processing and Artificial Intelligence.

6 Discussion

Table 1 shows that the disambiguation results obtained using the extended gloss overlap measure of semantic relatedness are significantly better than both the random and Original Lesk baselines. In the Original Lesk Algorithm, relatedness between two synsets is measured by considering overlaps between the glosses of the candidate senses of the target word
and its neighbors. By adding the glosses of related synsets, the results improve by 89% relative (16.3% absolute). This shows that overlaps between glosses of synsets explicitly related to the input synsets provide almost as much evidence about the implicit relation between the input synsets as do overlaps between the glosses of the input synsets themselves.

Table 1 also breaks down the precision, recall and F-measure according to the part of speech of the target word. Observe that the noun target words are the easiest to disambiguate, followed by the adjective target words. The verb target words prove to be the hardest to disambiguate. We attribute this to the fact that the number of senses per target word is much smaller for the nouns and adjectives than it is for the verbs. Noun and adjective target words have less than 5 candidate senses each on average, whereas verbs have close to 16. Thus, when disambiguating verbs there are more choices to be made and more chances of errors.

The results in Table 1 are based on a 3 word window of context. In other experiments we used window sizes of 5, 7, 9 and 11. Although this increase in window size provides more data to the disambiguation algorithm, our experiments show that this does not significantly improve disambiguation results. This suggests that words that are in the immediate vicinity of the target word are most useful for disambiguation, and that using larger context windows either adds noise or redundant data. The fact that small windows are best corresponds with earlier studies on human subjects that showed that humans often only require a window of one or two surrounding words to disambiguate a target word [Choueka and Lusignan, 1985].

We also tried to normalize the overlap scores by the maximum score that two glosses can generate, but that did not help performance. We believe that the difference between the sizes of various glosses in terms of number of words is small enough to render normalization unnecessary.

6.1 Evaluating Individual Relation Pairs

Our measure of relatedness utilizes pairs of relations picked from the list of relations in Section 3.1. In this section we attempt to quantify the relative effectiveness of these individual relation pairs. Specifically, given a set of relations RELS, we create all possible minimal relation pair sets, where a minimal relation pair set is defined as a set that contains either exactly one relation pair {(r1, r1)} or exactly two relation pairs {(r1, r2), (r2, r1)}, where r1 ≠ r2. For example {(gloss, gloss)} and {(hype, gloss), (gloss, hype)} are both minimal relation pair sets.

We evaluate each of these minimal relation pair sets by performing disambiguation using only the given minimal relation pair set and computing the resulting precision, recall and F-measure. The higher the F-measure, the "better" the quality of the evidence provided by gloss overlaps from that minimal relation pair set. In effect we are decomposing the extended gloss overlap measure into its individual pieces and assessing how each of those pieces performs individually.

Recall that each part of speech has a different set of relations associated with it. The difference in the numbers and types of relations available for the three parts of speech leads us to expect that the optimal minimal relation pair sets will differ with the part of speech of the input synsets.

Table 2: Best Relation Pair Sets (Nouns, Adjectives, Verbs)

Table 2 lists the top 5 minimal relation pair sets for target words belonging to the three parts of speech, where relation pair sets are ranked on the F-measure achieved by using them in disambiguation. Note that in this table, hypo, mero, also, attr, and hype stand for the relations hyponym, meronym, also-see, attribute, and hypernym respectively. Also in the table, the relation pair r1-r2 refers to the minimal relation pair set {(r1, r2), (r2, r1)} if r1 ≠ r2, and {(r1, r2)} otherwise.

Perhaps one of the most interesting observations is that no single minimal relation pair set achieves an F-measure even close to that achieved using all the relation pairs (0.42, 0.35, and 0.26 for nouns, verbs, and adjectives respectively), suggesting that there is no single relation pair that generates a lot of evidence for the relatedness of two synsets. This finding also implies that the richer the set of explicit relations between synsets in WordNet, the more accurate the overlap based measure of semantic relatedness will be. This fact is borne out by the comparatively high accuracy attained by nouns, which is the best developed portion of WordNet.

For nouns, Table 2 shows that comparisons between the glosses of the hyponyms and meronyms of the input synsets, and also between the glosses of the input synsets, are most informative about the relatedness of the synsets. Interestingly, although both hyponyms and hypernyms make up the is-a hierarchy, the hypernym relation does not provide an equivalent amount of information. In WordNet, a noun synset usually has a single hypernym (parent) but many hyponyms (children), which implies that the hyponym relation provides more definitional glosses to the algorithm than the hypernym relation.
This asymmetry also exists in the holonym-meronym pair of relations. Most noun synsets have fewer holonym (is-a-part-of) relations than meronym (has-part) relations, resulting in more glosses from the meronym relation. These observations further confirm that the accuracy of the relatedness measure depends at least partly on the number of glosses that we can access for a given pair of synsets.

This finding also applies to adjectives. The two most frequent relations, the also-see relation and the attribute relation, rank highest among the useful relations for adjectives. Similarly for verbs, the hyponym relation again appears to be extremely useful. Interestingly, for all three parts of speech, the example "relation" (which simply returns the example string associated with the input synset) seems to provide useful information. This is in keeping with the SENSEVAL-2 results, where the addition of example strings to a Lesk-like baseline system improves recall from 16% to 23%.

7 Related Work

A number of measures of semantic relatedness have been proposed in recent years. Most of them rely on the noun taxonomy of the lexical database WordNet. [Resnik, 1995] augments each synset in WordNet with an information content value derived from a large corpus of text. The measure of relatedness between two concepts is taken to be the information content value of the most specific concept that the two concepts have in common. [Jiang and Conrath, 1997] and [Lin, 1997] extend Resnik's measure by scaling the common information content values by those of the individual concepts. Our method of extended gloss overlaps is distinct in that it takes advantage of the information found in the glosses. The other measures rely on the structure of WordNet and corpus statistics. In addition, the measures above are all limited to relations between noun concepts, while extended gloss overlaps can find relations between adjectives and verbs as well.

8 Conclusions

We have presented a new measure of semantic relatedness based on gloss overlaps. A pair of concepts is assigned a value of relatedness based on the number of overlapping words in their respective glosses, as well as the overlaps found in the glosses of concepts they are related to in a given concept hierarchy. We have evaluated this measure relative to human judgments and found it to be reasonably correlated. We have carried out a word sense disambiguation experiment with the SENSEVAL-2 lexical sample data. We find that disambiguation accuracy based on extended gloss overlaps is higher than that of all but one of the participating SENSEVAL-2 systems.

Acknowledgements

Thanks to Jason Rennie for his WordNet::QueryData module, and to Siddharth Patwardhan for useful discussions, experimental help, and for integrating the extended gloss overlap measure into his WordNet::Similarity module. Both of these modules are freely available from the Comprehensive Perl Archive Network (search.cpan.org).

This work has been supported by a National Science Foundation Faculty Early CAREER Development award (#0092784) and NSF Grant no. REC-9979894. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF or the official policies, either expressed or implied, of the sponsors or of the United States Government.

References

[Budanitsky and Hirst, 2001] A. Budanitsky and G. Hirst. Semantic distance in WordNet: An experimental, application-oriented evaluation of five measures. In Workshop on WordNet and Other Lexical Resources, Second Meeting of the North American Chapter of the Association for Computational Linguistics, Pittsburgh, June 2001.

[Choueka and Lusignan, 1985] Y. Choueka and S. Lusignan. Disambiguation by short contexts. Computers and the Humanities, 19:147-157, 1985.

[Edmonds and Cotton, 2001] P. Edmonds and S. Cotton, editors. Proceedings of the Senseval-2 Workshop. Association for Computational Linguistics, Toulouse, France, 2001.

[Fellbaum, 1998] C. Fellbaum, editor. WordNet: An electronic lexical database. MIT Press, 1998.

[Jiang and Conrath, 1997] J. Jiang and D. Conrath. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of the International Conference on Research in Computational Linguistics, Taiwan, 1997.

[Lesk, 1986] M. Lesk. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of SIGDOC '86, 1986.

[Lin, 1997] D. Lin. Using syntactic dependency as a local context to resolve word sense ambiguity. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 64-71, Madrid, July 1997.

[Miller and Charles, 1991] G.A. Miller and W.G. Charles. Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1-28, 1991.

[Resnik, 1995] P. Resnik. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, Montreal, August 1995.

[Rubenstein and Goodenough, 1965] H. Rubenstein and J.B. Goodenough. Contextual correlates of synonymy. Computational Linguistics, 8:627-633, 1965.

[Zipf, 1935] G. Zipf. The Psycho-Biology of Language. Houghton Mifflin, Boston, MA, 1935.