Date: Fri, 28 Feb 1997 19:50:22 -0500
From: nelson@media.mit.edu (Nelson Minar)
To: Henry Lieberman
Subject: notes on exercise
In-Reply-To: <9703010041.AA09722@ml.media.mit.edu>
References: <9703010041.AA09722@ml.media.mit.edu>

Our user always explicitly addressed the agent, prefixing queries with
"agent,". So, "Agent, what is the best way to find out the history of
Legos?" Nice simple cue for software to pick up on (a small sketch of
spotting that prefix is at the end of this note).

Our user always asked the agent explicitly for the answer first: "What
is the humidity in Paris today?", not "How can I find the humidity in
Paris?". This points to an interesting thing about software agents -
how much will an average user trust an agent to just supply an answer,
and how much will the user want to be taught how to find the answer?
There's a tradeoff: if the agent can just give you the answer, then
that's probably the right thing to do for many applications. But what
if you don't have confidence that the answer is correct? There's a
similar tell-me/show-me-how tradeoff in traditional master/assistant
relationships, but the goal is different with software agents. There
are some things I never want to know how to do; I want my machine to
just do them for me.

There seemed to be two types of requests:

 - questions about specific keyword searches on a web page: "What's
   the best way to search for weather info on Paris?"
 - general requests about information sources: "Where can I find
   information on Clinton?"

Most problems reduced to a search engine (altavista) query. Some
questions were answered with more organized sources like Yahoo, but
very few. In general, existing web sources for finding information are
pretty good; they're just awkward to use.

The agent could be very helpful if it had specialized knowledge about
the quirks of each search engine: for example, what syntax to use to
get an AND search on each engine, or estimates of the completeness of
different databases, how well different engines rank the responses,
etc. There's a lot of domain-specific knowledge that could be encoded
(a sketch of what such a table might look like is at the end of this
note).

A software agent could help the user enormously by scanning documents
for keywords relevant to whatever question was currently being
answered. Something as stupid as highlighting the word "Clinton" if
the user was looking for Clinton information would be a big help
(also sketched at the end of this note).

The agent shouldn't suggest too many alternatives; it wastes the
user's time. The agent should just go with the best guess and have a
graceful recovery mode.
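
To make the "agent," cue concrete, here's a minimal sketch (Python) of
how a front end might pick it out. The function name and the
case/whitespace handling are my own assumptions, not anything we
actually built:

    import re

    # Hypothetical front-end filter: decide whether an utterance is
    # addressed to the agent, based on the "agent," prefix our user
    # always supplied.
    AGENT_PREFIX = re.compile(r'^\s*agent\s*,\s*', re.IGNORECASE)

    def extract_agent_query(utterance):
        """Return the query with the "agent," prefix stripped, or None
        if the utterance wasn't addressed to the agent at all."""
        match = AGENT_PREFIX.match(utterance)
        if match is None:
            return None
        return utterance[match.end():]

    # extract_agent_query("Agent, what is the best way to find out the
    # history of Legos?")
    #   -> "what is the best way to find out the history of Legos?"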
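
On encoding per-engine quirks: here's a sketch of what a table of
AND-search formatters might look like. The engine names and the
syntaxes shown are illustrative guesses rather than a checked
reference, and the whole mapping is hypothetical:

    # Hypothetical table of engine quirks: how to express "all of these
    # terms" on each engine. Entries are illustrative, not vetted.
    AND_QUERY_FORMATTERS = {
        "altavista": lambda terms: " AND ".join(terms),              # boolean operator
        "excite":    lambda terms: " ".join("+" + t for t in terms), # required-term prefix
        "yahoo":     lambda terms: " ".join(terms),                  # plain keywords
    }

    def build_and_query(engine, terms):
        """Format a list of search terms as an AND query for the given engine."""
        formatter = AND_QUERY_FORMATTERS.get(engine)
        if formatter is None:
            raise ValueError("no quirk knowledge for engine: %s" % engine)
        return formatter(terms)

    # build_and_query("altavista", ["Paris", "humidity"])
    #   -> "Paris AND humidity"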
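
And the highlighting idea as a sketch: take the keywords from the
question currently being answered and mark them up in whatever page
the user is reading. Wrapping matches in <b>...</b> is just an
assumption about what the browser side would want:

    import re

    def highlight_keywords(document_text, keywords):
        """Wrap each occurrence of a keyword in <b>...</b> so the
        user's eye lands on the relevant spots. Case-insensitive,
        whole words only."""
        for word in keywords:
            pattern = re.compile(r'\b(%s)\b' % re.escape(word), re.IGNORECASE)
            document_text = pattern.sub(r'<b>\1</b>', document_text)
        return document_text

    # highlight_keywords("A speech by Clinton on the economy...", ["Clinton"])
    #   -> "A speech by <b>Clinton</b> on the economy..."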