
Prognosis

Are we spared the risk of a dignity exchange if it could be shown to be impossible to build computers that emulate humans? Many of us scientists have serious doubts that human abilities can ever be fully duplicated in machines. If we could prove that machines could not be given some crucial human ability, would the argument be over? I think not. My reason is that the seeds of the dignity exchange have already germinated, even though today's computers are vastly inferior to humans in many important abilities.

An intriguing set of studies was conducted by Clifford Nass, Byron Reeves, and their colleagues at Stanford University, and is described more fully in their book, The Media Equation (Reeves and Nass, 1996). Here is what Reeves and Nass found: if you take a classic test of social interaction between two people and replace one of the people with a computer, the classic results still hold. For example, if a person teaches you something, you might tell them afterwards ``that was really great.'' If another person asks you how that teacher was, you might say only ``great'': people tend to give slightly higher praise face-to-face. Now replace both the teacher and the person asking for the evaluation with computers. If a computer teaches you something and then asks ``please rate how this computer did as a teacher,'' you might click on ``really great.'' If another, identical computer asks you to rate how the first computer did, you would still tend to click on something positive, but not quite as positive: ``great.'' In other words, you are nicer ``face-to-face'' (face-to-monitor) than you are otherwise; the results of the human-human interaction still hold for the human-computer interaction. This study, and dozens of others conducted by Reeves and Nass, revealed that the classic results of human-human studies were maintained in human-computer studies. Reeves and Nass concluded that human-computer interaction is natural and social.

Now, imagine a situation where one person honors, esteems, or values the worth of another. If the situation is re-created with a computer in place of the esteemed person, then Reeves and Nass's results indicate that it would be natural for the human to honor, esteem, and value the worth of the computer. These results hold today, even for non-humanoid machines.

Machines will not have to have emotions or other human-like abilities to threaten human dignity. Today's computers already intimidate and beguile, even without emotions or other crucial human abilities. Deep Blue's triumph over grandmaster Garry Kasparov rocked the world not because it was a great chess victory, but because a machine beat the best human chess player. The contest was billed as one of humanity against computerdom, even though Deep Blue was incapable of appreciating the meaning of the contest and the magnitude of its victory.

Meanwhile, in cubicles, libraries, and home offices around the world, we hear the mutters of frustrated users, blaming themselves instead of the machine, as if they, and not the computer, are the poorly designed ones with inferior intelligence. One of the things we are trying to do is teach computers to recognize the frustration of users, and it is not hard to find examples of people expressing their negative feelings.

Best-selling manuals with titles ending in ``for Dummies'' are purchased with an air of humored resignation. The brilliant director of MIT's Laboratory for Computer Science confesses that even he can no longer understand how to work everything on his computer. It is no wonder that the average user has a growing feeling of inferiority toward computers.

Mild human fears are amplified by science fiction. Mary Shelley's Frankenstein remains a frightening scenario, even though its ``technology'' is almost laughable today. A century after the story of Frankenstein, Karel Capek introduced the word ``robot,'' coined by his brother Josef, in his play ``R.U.R.'' The robots in the play wipe out the human race and become the next new species. Nearly five decades later, in Kubrick and Clarke's 2001, we meet the computer HAL, who claims to be ``foolproof and incapable of error.'' Most people hardly noticed HAL's errors in the film, and most felt more sympathy at HAL's termination at the end of the film than they felt for the deaths of any of the human characters, even though HAL's actions led to those deaths.

Isaac Asimov acknowledged the growing human fear of the power of robots, at least implicitly, in the post-HAL collection ``The Bicentennial Man and Other Stories.'' Each of his robots is designed to obey ``The Three Laws of Robotics'':

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

How does a robot decide if the situation is a potentially harmful one? Not all situations are perfectly clear in the instant in which one must decide how to act. In humans, the emotion system is critically involved in split-second decisions, in determining saliency, and in value judgments. Subjective feelings, especially those of right and wrong, are powerful guides in decision making.

Asimov's robots could get into states where they could not reach a decision. The image is like that of Robbie the robot in the film Forbidden Planet, who, when asked to point a gun at a human and pull the trigger, starts shaking and ceases to function.

Asimov's three laws illustrate the valuing of human life over robot ``life.'' Human preservation is placed above the self-preservation of the computer. The laws try to ensure that computers respect and honor human dignity above machine dignity.

However, the laws are not infallible. One can construct situations in which the computer cannot reach a rational decision that satisfies the laws, or in which, through lack of full information, the computer brings about harm or allows harm to come to a human.
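
To make this failure mode concrete, here is a minimal sketch in Python of one way the strict priority ordering of the laws might be coded. The action names, the violation estimates, and the severity weights are hypothetical illustrations of mine, not anything from Asimov; the point is only that, under incomplete information, every candidate action can appear to violate the First Law, so the procedure returns no action at all and the robot freezes, like Robbie:

    from enum import IntEnum

    class Law(IntEnum):
        """Asimov's Three Laws, in strict priority order."""
        FIRST = 1   # do not injure a human, or allow harm through inaction
        SECOND = 2  # obey orders given by human beings
        THIRD = 3   # protect your own existence

    def choose_action(candidates):
        """Return the best admissible action, or None if the robot freezes.

        `candidates` maps an action name to the set of Laws the robot
        estimates that action would violate. The estimates depend on the
        robot's possibly incomplete information about the world.
        """
        # The First Law is absolute: discard any action believed to
        # violate it. "Standing by" counts as an action too, since
        # inaction that allows harm also violates the First Law.
        admissible = {a: v for a, v in candidates.items()
                      if Law.FIRST not in v}
        if not admissible:
            return None  # no lawful choice exists: the robot locks up
        # Among lawful actions, prefer the least severe breaches;
        # violating the Second Law is weighted worse than the Third.
        weight = {Law.SECOND: 2, Law.THIRD: 1}
        return min(admissible,
                   key=lambda a: sum(weight[law] for law in admissible[a]))

    # A dilemma under incomplete information: the robot believes firing
    # the gun harms a human directly, and that standing by allows harm
    # through inaction. Every option violates the First Law.
    dilemma = {
        "fire_gun": {Law.FIRST},
        "stand_by": {Law.FIRST},
    }
    print(choose_action(dilemma))    # prints: None -- the robot freezes

    # With fuller information (say, the gun is known to be unloaded),
    # a lawful action exists and normal operation resumes.
    clarified = {
        "fire_gun": set(),           # no harm, and the order is obeyed
        "refuse_order": {Law.SECOND},
    }
    print(choose_action(clarified))  # prints: fire_gun

The sketch makes the essay's point mechanical: the freeze is not a bug in the priority ordering but a direct consequence of it whenever the robot's information, right or wrong, leaves no admissible action.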





