From: Tom White
Subject: MAS837 Web Browsing Experiment
To: lieber@media.mit.edu (Henry Lieberman)
Date: Wed, 5 Mar 1997 09:19:30 -0500 (EST)
Cc: nelson@media.mit.edu (Nelson Minar), mcg@media.mit.edu (Steven McGeady)

"Sorry this letter is so long, I didn't have time to write a shorter one."
   -- Mark Twain

===========================

Tom White
MAS 837 Collaborative Web Browsing Experiment

My experiment group consisted of Nelson Minar, Steven McGeady, and myself. I was the user of the system, using the agent to assist me in finding information on the web.

When presented with a question to answer, like "What is the chemical name for the biochemical compound caffeine?", I would usually first ask my agent if he knew the answer outright. Though the agent invariably did not, this would have been the optimal outcome: if agents are supposed to save me time in completing my tasks, then getting the answer at the outset clearly saves the most time. Nelson raises an interesting point in asking whether I would have trusted the agent had he provided the answer immediately; in cases where I had little domain knowledge of the problem, I would have wanted confirmation. Thus, I would expect any agent I would really use to always be ready to answer the important follow-up question, "How do you know that?"

Answering the question "When were LEGO toys first introduced to the United States?" brought up an interesting point about the experiment itself. The agent suggested that I go to the LEGO company's web site, www.lego.com. This led to a LEGO history page with in-depth information about the company, but one that did not explicitly include the answer to the question. We continued searching because I knew the question must have been gleaned from an existing web page (we later found the answer in the LEGO FAQ), but in a situation outside the experiment I would not have had this knowledge, and I would have tried to infer the answer from the information on the LEGO history page. This suggests that adding questions with no guaranteed explicit answer, which is much more like the work I do in practice, would have affected the perceived usefulness of the agent.

Though the incorrect answer on the LEGO question was understandable, with this and other incorrect answers there was a higher level of frustration than if I had arrived at the dead end on my own. After all, I'm perfectly capable of finding wrong answers without the help of an agent, and perhaps without the agent's help I would have arrived at the correct answer somewhere else. Though this reaction is perhaps irrational, it would be interesting to test people's emotive responses to bad information from an agent. I would expect that people are not used to receiving wrong answers from a computer. Judging by how freely many people vent frustration at computers ("Stupid computer!"), unsure answers should be presented as such.

Both questions and answers became more refined over the course of the experiment. Initially, the agent was unsure whether I knew what Yahoo was, and would answer "Try looking on www.yahoo.com". Later my questions would draw on my knowledge of Yahoo: "Would you suggest searching Yahoo for this?" And answers would likewise draw on my perceived understanding of Yahoo: "There is probably a category on this at Yahoo." These later transactions are much more economical in terms of exchanged information, and they suggest that a good agent would be able to use shared knowledge to compose efficient responses.
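To make that last point concrete, here is a minimal sketch (in Python, purely hypothetical; nothing like this existed in our experiment) of an agent that tracks which concepts the user has already demonstrated and phrases its suggestions accordingly. The names here, such as user_knows and suggest, are my own invention.

    # Hypothetical sketch: phrase suggestions based on shared knowledge.
    class Agent:
        def __init__(self):
            # Concepts the user has already demonstrated knowledge of.
            self.user_knows = set()

        def suggest(self, topic):
            if "yahoo" in self.user_knows:
                # Terse form: relies on shared knowledge of what Yahoo is.
                return "There is probably a category on %s at Yahoo." % topic
            # Verbose form: spells out the resource for an unfamiliar user,
            # then records that the user has now been introduced to it.
            self.user_knows.add("yahoo")
            return "Try looking for %s on www.yahoo.com." % topic

    agent = Agent()
    print(agent.suggest("caffeine"))  # verbose on first use
    print(agent.suggest("LEGO"))      # terse once Yahoo is shared knowledge

The point of the sketch is only that the second exchange transmits less redundant information than the first, which is the economy the experiment converged on.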
To be very useful, an agent will need to know an order of magnitude more about a topic than I do in order to overcome the transaction costs of using the agent at all. An agent with a simple bag of tricks is of little use once the tricks are learned. Even in the short span of this experiment, the two stock answers "Try AltaVista" and "Try Yahoo" were useless, since I could try both (in separate windows) in the time it would take to ask the question. Of course, if the agent had his own computer and were able to process information faster than I can, this would not have been the case; his answer then might be, "I don't know, give me a minute to check a couple of places."

After I had learned the stock answers, I would sometimes ask the agent just to see if he had any other approaches. This led to an amusing pattern in which I would ask the question, get the response, and then do something else; instinctively my agent would say, "No, not that!" This suggests that people will not always take the agent's advice, and that giving the agent the ability to know whether its advice was followed would yield very useful information for understanding the user's preferences.

-Tom

--
===========================================================
Tom White                                tom@media.mit.edu
===========================================================