Ten Days in the Life of Hotbot and Snap - A Case Study

Extended Abstract
Judit Bar-Ilan
School of Library, Archive and Information Studies
The Hebrew University of Jerusalem, Jerusalem, Israel
e-mail: judit@cc.huji.ac.il


We present here the results of a case study in which twenty "random queries" were submitted for ten consecutive days to Hotbot and Snap, two search tools that draw their results from the database of Inktomi. The results show huge daily fluctuations in the number of hits retrieved by Hotbot, and high stability in the hits displayed by Snap. These findings are intended to make users of Hotbot aware of its instability as of October 1999, and they raise questions about the reliability of previous studies estimating the size of Hotbot based on its overlap with other search engines.

Hotbot (http://www.hotbot.com) and Snap's power search (named simply Snap from this point on) (http://www.snap.com/search/power/form/0,179,home-0,00html?st.sn.srch.0.pwr) both base the search results they display on the results they receive from the search engine Inktomi (http://www.inktomi.com). Even though they are called search engines, they are actually only different interfaces to the same engine, and they exhibit more or less the same search capabilities. Inktomi carries out the bulk of the work: it crawls the Web, maintains the index (or indexes) built from the results of the crawling, and processes the queries submitted to it by the different interfaces. Thus, we shall call Hotbot and Snap search interfaces rather than use the usual terminology, search engines.

It has been noted before ([1], [2]) that Hotbot and Snap do not retrieve exactly the same results, even though, according to Sullivan [2], both of them base their results on the complete Inktomi database. Notess [1] claims that Inktomi runs several clusters of computers, and that the different search interfaces base their results on different clusters.

The aim of the current case study was to observe the retrieval patterns of these search interfaces over a period of ten days, between September 26, 1999 and October 5, 1999. The same set of twenty "random queries" was presented to Hotbot and Snap each day. The twenty queries we used were based on the actual information needs of a "random family". They utilized the different search capabilities of the search interfaces (single word, exact phrase, all the words, complex boolean, and "links to this URL"), and the numbers of results for the queries fell into four size ranges (up to 40, 40-200, 300-550, 600-1000). Crossing these two facets - five query types by four size ranges - gave rise to twenty different categories, with one query in each category.
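The twenty categories can be read as the cross product of the five search capabilities and the four answer-size ranges named above; a minimal sketch (the labels below simply restate the facets from the text):

```python
from itertools import product

# The five query types and four answer-size ranges from the study design.
query_types = ["single word", "exact phrase", "all the words",
               "complex boolean", "links to this URL"]
size_ranges = ["up to 40", "40-200", "300-550", "600-1000"]

# Crossing the two facets yields the twenty query categories,
# with one query chosen per category.
categories = list(product(query_types, size_ranges))
print(len(categories))  # 5 x 4 = 20 categories
```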

Results and Discussion

In spite of the differences in query types and answer sizes, the same pattern was detected for all queries: Snap retrieved more or less the same number of hits for each query each day (as can be expected), while for Hotbot we witnessed a trend of "ups and downs" in the number of daily results. The same trend was observed for all twenty queries. The daily averages are displayed in Diagram 1.

The maximum fluctuation (the ratio between the largest and the smallest daily number of hits for a query), averaged over the twenty queries, was 3.98 for Hotbot - meaning that for a given query, on some days nearly four times as many results were retrieved as on other days(!) - while the corresponding average was only 1.35 for Snap. Statistical tests show that the large daily fluctuations are almost always highly significant for Hotbot and non-significant for Snap.
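The fluctuation measure can be sketched as follows, assuming it is the ratio of the largest to the smallest daily hit count per query, averaged over all queries. The daily counts below are invented for illustration only; they are not data from the study.

```python
def max_fluctuation(daily_hits):
    """Ratio of the largest to the smallest daily number of hits for one query."""
    return max(daily_hits) / min(daily_hits)

def average_fluctuation(queries):
    """Average the per-query fluctuation ratios over all queries."""
    ratios = [max_fluctuation(hits) for hits in queries.values()]
    return sum(ratios) / len(ratios)

# Hypothetical ten-day hit counts for two queries on each interface.
hotbot_like = {"q1": [100, 40, 95, 30, 110, 25, 100, 35, 90, 28],
               "q2": [500, 160, 480, 140, 520, 130, 490, 150, 470, 135]}
snap_like = {"q1": [100, 100, 102, 101, 100, 99, 100, 101, 100, 100],
             "q2": [500, 501, 500, 499, 500, 500, 502, 500, 500, 499]}

print(round(average_fluctuation(hotbot_like), 2))  # large ratio: unstable
print(round(average_fluctuation(snap_like), 2))    # close to 1: stable
```

A ratio close to 1 indicates a stable interface; the larger the ratio, the wider the swing between a query's best and worst day.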

Some of the fluctuations can probably be explained by the dynamic nature of the Web, by the periodic updating/rebuilding of Inktomi's database, and by time-outs occurring at Inktomi. However, given the relative stability of Snap, it is difficult to conceive that Inktomi alone is to blame for these huge fluctuations in the number of Hotbot's search results (for one specific query, we received 9.5 times more answers on a certain day than five days later, while Snap retrieved exactly the same number of hits on both of these days).

Diagram 1: Average number of hits over the twenty queries for Hotbot and Snap for each day of the study
One might suggest that when a large number of hits is retrieved, many of them are duplicates, and even more are nonrelevant to the query. We ruled out both possibilities: we counted only nonduplicate hits, and each URL was visited and its contents saved on our local disks the first time it was retrieved by one of the interfaces. Each of these documents was tested for the appearance of the search terms in it. A very high percentage (93.5% on average) of the retrieved documents did contain the search terms. For Hotbot, we also observed a pattern of URLs being dropped and later rediscovered.
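The two checks described above can be sketched as follows; the URLs, documents, and search terms are hypothetical stand-ins for the pages the study actually saved.

```python
def unique_hits(urls):
    """Keep each URL only once, preserving first-seen order (deduplication)."""
    seen = []
    for url in urls:
        if url not in seen:
            seen.append(url)
    return seen

def contains_all_terms(document_text, terms):
    """Check whether every search term appears in the saved document."""
    text = document_text.lower()
    return all(term.lower() in text for term in terms)

# Hypothetical result list with a duplicate URL, and the saved page contents.
hits = ["http://a.example/", "http://b.example/", "http://a.example/"]
docs = {"http://a.example/": "pages about web search engines",
        "http://b.example/": "an unrelated page"}
terms = ["search", "engines"]

deduped = unique_hits(hits)           # duplicates are not counted
matching = [u for u in deduped if contains_all_terms(docs[u], terms)]
print(len(deduped), len(matching))
```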

To the best of our knowledge, such findings on the instability of Hotbot have not been published before. The findings of this study raise serious questions about the reliability of previous works that tried to estimate the size of Hotbot based on a set of random queries (e.g. [3-5]). Which Hotbot did they query? Hotbot on a "good day" or Hotbot on a "bad day"? Moreover, such a high degree of instability undermines the search efforts of the users: results found through the interface disappear the next day and reappear a few days later. Sometimes this happens even to the most highly ranked result.

We speculate that the main cause of the problems is the Hotbot search interface, which cannot handle its workload well. The high workload may very well be a result of its popularity. However, that popularity may decrease once users become aware of the instability. We believe that users prefer stable search tools, which enable them to easily locate results they saw the day before. Search tools should inform their users of problems, even if these problems are only temporary.

We believe that additional case studies on search engine performance and comparison, carried out in an objective, impartial manner, can throw light on the shortcomings of the different search tools and can encourage them to enhance the quality control of their search results.


  1. Notess, G. R. (1999). The Many Faces of Inktomi. Database, 22 (2): 63-65. http://onlineinc.com/database/DB1999/net4.html
  2. Sullivan, D. Search Engine Watch: Subscriber-Only Area. http://www.searchenginewatch.com/subscribers/
  3. Bharat, K. and Broder, A. (1998). A Technique for Measuring the Relative Size and Overlap of Public Web Search Engines. Proceedings of the 7th International World Wide Web Conference, April 1998, Computer Networks and ISDN Systems, 30: 379-388.
  4. Lawrence, S. and Giles, C. L. (1998). Searching the World Wide Web. Science,  280: 98-100.
  5. Lawrence, S. and Giles, C. L. (1999). Accessibility and Distribution of Information on the Web. Nature 400: 107-110.