

Underlying assumptions

As with all scientific models, our attempts to evaluate the performance of a search engine rest on a number of assumptions. Many of these concern the user, in particular simplifying assumptions about how users assess the relevance of documents.

{\bf Real FOA versus laboratory retrieval}

From the FOA perspective, users retrieve documents as part of an extended search process. They do this often, and because they need the information for something important to them. If we are to collect useful statistics about FOA, we must either capture large numbers of such users ``in the act'' (i.e., in the process of a real information-seeking activity), or we must attempt to create an artificial, laboratory setting. The former is much more desirable, but it imposes strong requirements: a suitable corpus, a population of users, and access to their retrieval data. So typically we must do the latter, and work in a lab. The first big assumption, then, is that our lab setting is similar to real life; i.e., that ``guinea pig'' users will have reactions that mirror real ones. {Yes and no. The Web generally, and Web search engines in particular, obviously generate huge traffic, and potentially lots of interesting data about how real users (versus experimental subjects) FOA. But access to these statistics, most conveniently collected by the Web search servers, is an increasingly valuable commodity! Many people would like to know what sorts of things people are searching for, and how they are searching for it.

It's also important not to think of the Web that everyone is searching as ``the same corpus.'' One of the Web's most salient features is its dynamism: new documents are added, and others (or at least the links to them!) are removed, all the time. This makes it difficult to compare retrieval results collected at two different times.}

{\bf Inter-subject reliability}

Even if we assume we have a typical user, and that this user is engaged in an activity that at least mirrors the natural FOA process, we must also believe that this user will assess relevance the same way as everyone else! But clearly the educational background of each user, the amount of time he or she can devote to the FOA process relative to the rest of the task, and similar factors will make one user's reactions differ from another's. For example, there is some evidence that stylistic variations also affect perceived relevance [Karlgren96]. The {\em consensual} relevance statistic (cf. Section §4.3.2) is one mechanism for aggregating across multiple users.
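Section §4.3.2 defines the consensual statistic precisely; purely as an illustration of the aggregation idea, the sketch below pools binary judgments from several assessors and reports, for each $(query, document)$ pair, the fraction who judged the document relevant. The function name and the tuple format of the judgments are invented for this example.

\begin{verbatim}
from collections import defaultdict

def consensual_relevance(judgments):
    """Pool binary relevance judgments from several assessors.

    `judgments` is an iterable of (user_id, query_id, doc_id, relevant)
    tuples.  Returns a dict mapping each (query_id, doc_id) pair to the
    fraction of assessors who judged that document relevant.
    """
    votes = defaultdict(lambda: [0, 0])        # (query, doc) -> [relevant, total]
    for user_id, query_id, doc_id, relevant in judgments:
        key = (query_id, doc_id)
        votes[key][1] += 1
        if relevant:
            votes[key][0] += 1
    return {key: rel / total for key, (rel, total) in votes.items()}

# Three assessors, one query, two documents.
judgments = [
    ("u1", "q1", "d1", True),  ("u2", "q1", "d1", True),  ("u3", "q1", "d1", False),
    ("u1", "q1", "d2", False), ("u2", "q1", "d2", False), ("u3", "q1", "d2", True),
]
print(consensual_relevance(judgments))   # d1 -> 2/3, d2 -> 1/3
\end{verbatim}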

This variability becomes a concern of INTER-SUBJECT RELIABILITY. If we intend to make changes to document representations based on one user's RelFbk opinions, we would like to believe that there is at least some consistency between this user's opinions and those of others. This is a critical area for further research, but some encouraging preliminary results are available. For example, users of a multi-lingual retrieval system that presents some documents in their first language (``mother tongue'') and others in foreign languages they read less well seem able to provide consistent RelFbk data even for documents in their second, weaker language [Sheridan96].
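The text does not commit to any particular reliability measure; as one standard possibility, the sketch below computes Cohen's kappa, a chance-corrected agreement score, between two assessors' binary RelFbk over the same sequence of $(query, document)$ pairs.

\begin{verbatim}
def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two assessors' binary judgments.

    `labels_a` and `labels_b` are equal-length sequences of booleans,
    one judgment per (query, document) pair, in the same order.
    """
    assert len(labels_a) == len(labels_b) and len(labels_a) > 0
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_a = sum(labels_a) / n                       # fraction A judged relevant
    p_b = sum(labels_b) / n                       # fraction B judged relevant
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    if expected == 1.0:                           # degenerate: no variation at all
        return 1.0
    return (observed - expected) / (1 - expected)

user_a = [True, True, False, True, False]
user_b = [True, False, False, True, False]
print(cohens_kappa(user_a, user_b))   # ~0.62: substantial, but imperfect, agreement
\end{verbatim}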

{\bf Independence of inter-document relevance assessments}

Finally, notice that the atomic element of data collection for relevance assessments is typically a $(query_{i}, document_{j})$ pair: $document_{j}$ is relevant to $query_{i}$. Implicitly, this assumes that the relevance of one document can be assessed independently of assessments of other documents. Again, this is a very questionable assumption: two nearly duplicate documents might each be judged relevant in isolation, yet once the user has read one of them, the other tells them almost nothing new.
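To make the independence assumption concrete, here is a minimal (and entirely invented) sketch of relevance data stored in this pairwise form; the data structure itself has nowhere to record that seeing one document changed the user's opinion of another.

\begin{verbatim}
# A minimal "qrels"-style table of assessments, one entry per
# (query_id, doc_id) pair.  Each pair carries its own independent
# judgment; inter-document dependencies simply cannot be expressed.
qrels = {
    ("q1", "d1"): True,
    ("q1", "d2"): False,
    ("q2", "d1"): True,
}

def relevant_docs(query_id, qrels):
    """Documents judged relevant to `query_id` under the pairwise model."""
    return {doc for (q, doc), rel in qrels.items() if q == query_id and rel}

print(relevant_docs("q1", qrels))   # {'d1'}
\end{verbatim}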

Recall also that the {proxy} on which the user's relevance assessment depends is often distinct from the document itself. The user sees only the proxy, a small sample of the document in question, for example its title, first paragraph, or bibliographic citation. While we must typically take the user's reaction to the proxy as an opinion about the whole document, this inference depends critically on how informative the proxy is. Cryptic titles and very condensed citation formats can make these judgements suspect. And of course the user's ultimate assessment of whether a document is relevant, made only after having read it, remains a paradox.




FOA © R. K. Belew - 00-09-21