I recently wrote a book, ``Affective Computing,'' in which I describe how computers can be given certain emotional abilities, and why it is important in many cases to do this (Picard, 1997). It is certainly not an obviously good thing to do, unless you believe we should make machines in our image. That is not my motivation, however. My reason for building computers with affective abilities stems from new understanding about the importance of such abilities.
Emotions are no longer understood as something that merely makes people behave irrationally. Recent neurological and psychological findings show that emotion plays a crucial positive role in many mental processes, even regulating and guiding rational behavior. When emotions are effectively disconnected in a person, the result is not a super-rational being but a severely impaired one: patients with brain damage that disconnects their emotions behave less rationally, not more (Damasio, 1994). The Star Trek android ``Data,'' with his emotion chip turned off, does not accurately portray what would happen if humans had no emotions. When the human ``emotion chip'' is disconnected, rational behavior is impaired.
When I speak of ``emotion,'' I am not speaking only of a single state such as anger or joy, but also of a complex mix of internal signaling mechanisms that involve both the body and the mind, helping us function in a reasonable, balanced, and healthy way. Emotion includes not only highly visible signals such as facial expressions, but also unseen internal signals that guide thoughts, motivations, planning, decisions, learning, and memory retrieval. Emotion exerts powerful influence behind the curtain of human performance.
I direct research focused on giving computers the skills of emotional intelligence, especially the ability to recognize and respond to human emotions. For example, a computer could change its behavior in a beneficial way when a user is frustrated with the machine's actions. If a computer frustrates a user, it could notice that and try to improve its behavior. If it confuses a person, it could offer another explanation.
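To make the idea concrete, here is a minimal sketch of the kind of adaptive behavior described above. Everything in it is hypothetical: the signals (repeated errors, rapid retries, help requests), the weights, and the threshold are illustrative assumptions, not a description of any actual system.

```python
def frustration_score(repeated_errors, rapid_retries, help_requests):
    """Combine simple behavioral signals into a rough 0-1 score.

    The signals and weights here are purely illustrative; a real
    system would draw on richer cues (e.g., facial expression,
    physiology) and learned models.
    """
    score = 0.4 * min(repeated_errors / 5, 1.0)
    score += 0.4 * min(rapid_retries / 5, 1.0)
    score += 0.2 * min(help_requests / 3, 1.0)
    return score


def choose_response(score, threshold=0.5):
    """Pick a friendlier strategy when the user appears frustrated."""
    if score >= threshold:
        return "offer alternative explanation"
    return "continue normal behavior"


# A user who keeps hitting errors and retrying gets a different
# response than a user working smoothly.
strategy = choose_response(frustration_score(5, 5, 3))
```

Even a crude heuristic like this illustrates the point of the paragraph: the computer need not ``feel'' anything to usefully detect frustration and adapt.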
Depending on what the computer will be used for, it may need various subsets of human-like emotional abilities. The Macintosh has been smiling at people for years, without having any other emotional skills. Any claim that a computer actually ``has'' emotions must be qualified, as emotion is not a single simple thing that you either do or don't have. What we call ``emotion'' has many forms and comprises a variety of mechanisms.
I won't take time to go into all the different known components of human emotion, and how they might or might not be implemented in a machine. But let me be clear that it is not an all-or-nothing situation. Some mechanisms of emotion can be given to machines now, without giving them an emotional system that is equivalent to the human emotional system.
There exist computers and robots today that have emotional behaviors, and that have some so-called ``cognitive emotions.'' However, no present machine has an emotional system that comes close to rivaling that of humans. Nor do we know enough about how the human emotional system works to duplicate it in machines.
Could computers ever have emotions like ours? In my book I outline five components of human emotion that computers might be given.
There isn't time to explain all of these here, but I want to point out that an emotion system involves many things. Some components of emotion are fairly straightforward to implement. However, others are elusive, and require breakthroughs that may or may not be forthcoming.
One example of a human emotional ability that I doubt computers will ever have in the way people do is what we call ``emotional experience.'' This is what most people think of when they consider what they are feeling right now. If we dig deep into the underpinnings of this, we find not only pre-conscious and conscious awareness, but also signaling mechanisms that provide what are called ``subjective feelings.'' Subjective feelings include the feeling of knowing something (before you have retrieved it), the feeling that something is not quite right (which precedes knowing what that something is), and the intuition that something is right or good. Subjective feelings play a key role in knowing right from wrong, good from evil. We don't know how to implement these in computers. This may change. But even if subjective feelings can be fully implemented in computers, the experience of feelings will be different for computers and for humans. Emotional experience for humans is in large part bodily, and as long as humans and computers have different bodies, we won't be able to duplicate how each other feels.
Over a decade ago, Sherry Turkle asked people what the difference was between computers and humans (Turkle, 1984). The topic of emotions came up repeatedly. Younger children were apt to attribute the expression of emotion to machines and to cite this as a reason that machines are alive and like people. Older children used emotion to argue for the opposite conclusion: people are unique because they have emotions. Some adults have expressed to me their concern that giving computers emotions is not just a step toward making computers more like us, but that it may be the final step.
If computers become just like us in their abilities, then will they deserve human dignity? Will dignity be a prize awarded to the first computer that can appreciate it?