

The InfoSpiders algorithm

InfoSpiders search on-line for information relevant to the user by making autonomous decisions about what links to follow. Figure~(FOAref) shows the InfoSpiders implementation of the local selection algorithm. A central part of the system is the use of optional relevance feedback. The user may assess the relevance of (some of) the documents visited by InfoSpiders up to a certain point. Such relevance assessments take place asynchronously with respect to the on-line search, and alter the subsequent behaviors of agents on-line by changing the energy landscape of the environment. The process is akin to the replenishment of environmental resources; the user interacts with the environment to bias the search process. Let us first outline the algorithm at a high level; representation-dependent details are given in the next subsections, and experimental parameter values in the following section.
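As a concrete illustration, the overall loop can be sketched in Python roughly as follows. All names here (initialize_agents, sense_and_move, update_energy, learn, reproduce_or_kill, apply_user_feedback) are placeholders for the steps elaborated below, not part of the actual implementation, and the representation-dependent pieces are left abstract.

\begin{verbatim}
def infospiders(keywords, bookmarks, initial_energy):
    # Returns the visited links; in practice they are streamed to the
    # user and ranked by the relevance estimates.
    agents = initialize_agents(keywords, bookmarks, initial_energy)
    visited = []
    while agents:                              # stop on extinction (or by the user)
        for agent in list(agents):
            url = sense_and_move(agent)        # estimate link relevance, follow a link
            if url is None:                    # dead end: retire the agent (a simplification)
                agents.remove(agent)
                continue
            delta = update_energy(agent, url)  # relevance reward minus transfer cost
            learn(agent, delta)                # use the energy change as reinforcement
            reproduce_or_kill(agent, agents)   # local selection on the energy level
            visited.append(url)
        apply_user_feedback()                  # optional, asynchronous relevance feedback
    return visited
\end{verbatim}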

The user initially provides a list of keywords and a list of starting points, in the form of a bookmark file. The search is initialized by pre-fetching the starting documents. Each agent is ``positioned'' at one of these documents and given a random behavior (depending on the representation of agents) and an initial reservoir of energy.
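A minimal sketch of this initialization step, assuming a bare-bones Agent record and hypothetical helpers random_behavior() and retrieve() (the cached fetch sketched further below):

\begin{verbatim}
import random
from dataclasses import dataclass

@dataclass
class Agent:
    position: str      # URL of the document the agent currently sits on
    behavior: object   # representation-dependent, initially random
    energy: float      # current energy reservoir

def initialize_agents(keywords, bookmarks, initial_energy, n_agents=10):
    for url in bookmarks:            # pre-fetch the starting documents
        retrieve(url)
    return [Agent(position=random.choice(bookmarks),
                  behavior=random_behavior(keywords),   # assumed helper
                  energy=initial_energy)
            for _ in range(n_agents)]
\end{verbatim}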

Each agent ``senses'' its local neighborhood by analyzing the text of the document where it is currently situated. This way, the relevance of all neighboring documents --- those pointed to by the hyperlinks in the current document --- is estimated. Based on these link relevance estimates, the agent ``moves'' by choosing and following one of the links from the current document.
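One way to render the sense/move step is sketched below; estimate_link() and outgoing_links() stand in for the representation-dependent link scorer described in the next subsections, and the softmax-style stochastic choice is merely illustrative:

\begin{verbatim}
import math, random

def sense_and_move(agent):
    # Estimate the relevance of every document pointed to by the current
    # one, from the text around each hyperlink; estimate_link() is an
    # assumed, representation-dependent scorer.
    links = outgoing_links(agent.position)       # assumed helper
    if not links:
        return None                              # dead end
    weights = [math.exp(estimate_link(agent, link)) for link in links]
    # Pick one link stochastically, favoring higher estimates (the
    # softmax-style choice is illustrative, not the published rule).
    choice = random.choices(links, weights=weights, k=1)[0]
    agent.position = choice                      # "move" by following the link
    return choice
\end{verbatim}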

Next, the agent's energy is updated. Energy is needed in order to survive and move, i.e., continue to visit documents on behalf of the user. Agents are rewarded with energy if the visited documents appear to be relevant. The $e()$ function is used by an agent to evaluate the relevance of documents. If a document has previously been visited and assessed by the user, the user's assessment is used; if the document has not been visited before, its relevance must be estimated. This mechanism is implemented via a cache, which also speeds up the process by minimizing duplicate transfers of documents. While in the current, client-based implementation of InfoSpiders this poses no problem, caching is a form of communication and thus a bottleneck for the performance of distributed agents. In a distributed implementation, we imagine that agents will have local caches. When using the current implementation to simulate the performance of distributed InfoSpiders, we will simply set the cache size to zero.
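The reward computation might look as follows, with simple dictionaries playing the roles of the cache and of the record of user assessments; both data structures, and the fetch() and estimate_relevance() helpers, are assumptions made for illustration:

\begin{verbatim}
CACHE = {}       # url -> document text (dropping it simulates distributed agents)
assessed = {}    # url -> the user's relevance assessment, if any

def retrieve(url):
    # Fetch a document, going to the network only on a cache miss.
    if url not in CACHE:
        CACHE[url] = fetch(url)          # fetch() is an assumed network helper
    return CACHE[url]

def e(agent, url):
    # Relevance reward: the user's assessment when available, otherwise
    # a representation-dependent estimate of the document's relevance.
    if url in assessed:
        return assessed[url]
    return estimate_relevance(agent, retrieve(url))   # assumed helper
\end{verbatim}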

Agents are charged energy costs for the network load incurred by transferring documents. The cost function $c()$ should depend on the resources used, for example transfer latency or document size. For simplicity we will assume a constant cost for accessing any new document, and a (possibly smaller) constant cost for accessing the cache; this way stationary behaviors, such as going back and forth between a pair of documents, are discouraged.
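Under these simplifying assumptions, the energy update reduces to a relevance reward minus one of two constants; the particular values below are arbitrary placeholders:

\begin{verbatim}
C_NET, C_CACHE = 0.01, 0.001    # arbitrary constants, with C_CACHE <= C_NET

def update_energy(agent, url):
    # Charge a larger constant for a network transfer and a smaller one
    # for a cache hit, so that bouncing back and forth between a pair of
    # already-seen documents is discouraged.
    cost = C_CACHE if url in CACHE else C_NET
    delta = e(agent, url) - cost
    agent.energy += delta
    return delta                 # also serves as the reinforcement signal
\end{verbatim}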

Just as for graph search, instantaneous changes of energy are used as reward/penalty signals. This way, agents adapt during their lifetime by Q-learning. This adaptive process allows an agent to modify its behavior based on prior experience, by learning to predict the best links to follow.
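A rough sketch of such an update, in the spirit of Q-learning; train_estimator() is an assumed helper that adjusts the agent's link estimator toward the target, and the learning rate and discount factor are placeholders:

\begin{verbatim}
ALPHA, GAMMA = 0.5, 0.5    # placeholder learning rate and discount factor

def learn(agent, delta):
    # Q-learning-style update: the energy change just received, plus the
    # discounted best estimate available from the new document, becomes
    # the training target for the link the agent just followed.
    links = outgoing_links(agent.position)
    best_next = max((estimate_link(agent, l) for l in links), default=0.0)
    target = delta + GAMMA * best_next
    train_estimator(agent, target, learning_rate=ALPHA)   # assumed helper
\end{verbatim}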

Depending on its current energy level, an agent may be killed or be selected for reproduction. In the latter case offspring are recombined by the use of ``local crossover,'' whereby an agent can only recombine with agents residing on the same document, if there are any. Offspring are also mutated, providing the variation necessary for adapting agents by way of evolution.
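The selection step might be sketched as follows; the reproduction threshold, the halving of the parent's energy, and the crossover() and mutate() helpers are all assumptions made for illustration:

\begin{verbatim}
import copy, random

THETA = 1.0    # assumed reproduction threshold; death occurs at zero energy

def reproduce_or_kill(agent, agents):
    if agent.energy <= 0:
        agents.remove(agent)               # the agent is killed
    elif agent.energy >= THETA:
        offspring = copy.deepcopy(agent)
        # Local crossover: recombine only with agents on the same document.
        mates = [a for a in agents
                 if a is not agent and a.position == agent.position]
        if mates:
            offspring.behavior = crossover(offspring.behavior,
                                           random.choice(mates).behavior)
        offspring.behavior = mutate(offspring.behavior)   # assumed helpers
        agent.energy /= 2                  # parent and offspring split the
        offspring.energy = agent.energy    # parent's energy (assumed bookkeeping)
        agents.append(offspring)
\end{verbatim}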

Finally, the user may provide the system with relevance feedback. It is important to stress that this process is entirely optional --- InfoSpiders can search in a completely unsupervised fashion once they are given a query and a set of starting points. Relevance feedback takes place without direct on-line interactions between user and agents. The user may assess any visited document $D$ with feedback $\phi(D) \in \{-1, 0, +1\}$. All the words in the document are automatically assessed by updating a ``feedback list'' of encountered words. Each word $k$ in this list is associated with a signed integer $\omega_{k}$ that is initialized to 0 and updated each time any document is assessed by the user: $\forall k \in D: \omega_{k} \leftarrow \omega_{k} + \phi(D)$. The word feedback list is maintained to keep a global profile of which words are relevant to the user.
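In code, the per-document update that the asynchronous feedback step would perform might look like this; words() is an assumed tokenizer, and the assessed record is the one consulted by the reward function sketched above:

\begin{verbatim}
omega = {}    # global word feedback list: word -> signed integer, initially 0

def give_feedback(url, phi):
    # phi in {-1, 0, +1} is the user's assessment of the visited document.
    assessed[url] = phi                  # reused by e() on later visits
    for k in set(words(retrieve(url))):  # words() is an assumed tokenizer
        omega[k] = omega.get(k, 0) + phi
\end{verbatim}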

The ultimate output of the algorithm is a flux of links to documents, ranked according to some relevance estimate --- modulo relevance assessments by the user. The algorithm terminates when the population goes extinct for lack of relevant information resources, or when it is terminated by the user.



