Human-machine dignity exchange involves comparing the dignity of humans to that of machines, and asking which is greater. This is a disturbing idea, which we might try to avoid by arguing that the two need not be compared. A computer excels at ultrafast logic, accurate numerical computation, and logical reasoning. The human excels at slow but highly associative thinking, analogical reasoning, creative thought, planning and evaluation of goals, and social skills. The worth of all these things is high, and as long as the two sets of abilities remain essentially complementary, man and machine do not compare. One need not worry about their relative dignity; both have their own value. Machines complement man, and man complements machines. However, as we build computers to interact better with us, we build computers that are more like us. The distinctions between computers and us blur, and the complementarity argument fails.
Comparison invites a measure. But human dignity is a dangerous thing to measure. The worth of humans has been scaled by the color of their skin; elevated according to education, beauty, and notoriety; aggrandized in excessive compensation packages for CEOs; inflated by the popular appeal of athletic and acting ability; discounted in the twilight years of adulthood; insulted in slavery; ignored in the Holocaust; and declared irrelevant in abortion. Less worth or desirability is attributed to those who are average or below average, those who occupy positions of unassuming service, those who are infirm or weak, those who have suffered loss of their abilities in a tragic accident, those who are terminally ill, those who are not self-sufficient, and even those who are none of the above, but who are merely unwanted or unappreciated by someone arrogant or powerful.
If we look closely, we find something else very unsatisfactory about human measures of worth: they mark each and every one of us as having less worth at some point in life.
Another problem is that human measures of dignity depend on viewpoint. The woman with an unwanted pregnancy might deny any worth of the developing human in her womb, while a couple who has tried for a decade to have a child might give everything for the privilege of raising that same fetus. Human measures of worth depend on who is doing the measuring. The problem does not go away at birth. Hitler measured Jewish worth as nil, even declaring Jews as not human, while Jews and Gentiles of conscience recognized that the value of each Jewish life was (and is) inestimable. The hideous evil of Hitler's measuring stick is clear today, and yet attempts to measure human dignity reappear with each generation.
I was told that the thinking that led Einstein to formulate his great theory of relativity was based on a discussion with a rabbi about the symmetry of the golden rule. Einstein wondered whether what is true should not vary with viewpoint. I would like to suggest that this property should apply to true dignity as well: inherent worth should be intrinsic; it should not change with the viewpoint of the observer.
Over the years man has tried in vain to define human worth himself, without appeal to anything greater than man. We have risen as the center of a heavenly solar system, only to be blown about as dust in an expanding universe; we have ascended to a state of emotional and cognitive development higher than all animals, and fallen to depths of moral failure deeper than any animal. We have elevated ourselves as the creators of powerful machines able to marvelously augment human forces and reach, and we have been humbled as frustrated typists in front of a so-called friendly computer. The same criterion that elevates our feeling of human worth today can crush it tomorrow.
Let me tell a short story to illustrate the problem of judging human worth. One doctor asked another, ``About the termination of a pregnancy, I want your opinion. The father was syphilitic, the mother tuberculous. Of the four children born, the first was blind, the second died, the third was deaf and dumb, and the fourth also tuberculous. What would you have done?'' The doctor said that in this case he would have terminated the pregnancy. By human measures the child would probably amount to nothing but increased expenses, stress, and strain. The parents would probably die young, and the child would end up adding to the burden on society. Fortunately, this conversation took place long after the child in question had lived. The child happened to be Beethoven.
Now, I tell this story not to start a debate about abortion. That is not our topic here. My point is that our viewpoint in judgments of value and worth is a limited viewpoint. All the medical training in the world could not equip a doctor to accurately judge the value of a pregnancy. We are an unworthy judge of worthiness.
Variations on this story are enacted every minute, and in the future might include a new alternative. The worth of having a child might be weighed against the worth of having a computer. Future computers might play, giggle, ask interesting questions, and even be soft to snuggle up against. (Have you seen the new electronic stuffed Barney?) If desired, they could undergo physical morphogenesis from a robotic doll to a clumsy toddler, perhaps leap over adolescence, and arrive finally at a humanoid adult. They would not have to be potty-trained. They could grow with us, challenge us, make us proud, and ultimately inherit our money.
Computer vs. child. Might not the computer be a safer bet to have greater worth? Might its handicaps not be more easily fixed, and prove less burdensome to the parents? If it failed the parents in some way, it could be exchanged for a new model. And so on. Here is my point: if we humans establish a list of criteria for evaluating worth, then I think we can eventually design a machine that fulfills those criteria.
The hazard is not merely one of choosing the wrong criteria of worth, but is much more profound. The danger lies in assuming that we could devise any such criteria. When we take it upon ourselves to evaluate inherent worth, we do so with a mind that is not omniscient, with a heart that is self-centered, with ears that hear but for a moment, and with eyes that see only in part. We are inadequate gatherers, thinkers, and feelers of all there is to know. Different people at different times will come up with different estimates of your worth, and at some point in your life, these estimates will imply that you don't have worth.
There is a way out of this relativistic nightmare, and that is to insist upon a measure of dignity that is invariant to viewpoint, race, gender, age, and anything that violates the meaning of dignity as ``inherent'' worth.
There is at least one solution to the problem of finding such an invariant measure, although I have never heard it presented in quite this way: One can assume that dignity is given to all humans, intrinsically, by their Maker.
With this assumption, humans have dignity not because of their goodness, creativity, intelligence, beauty, contributions, capabilities, or any other human measures of value or worth. Ultimate human worth is the mark of an outside endower of dignity, given to every human. This mark is not earned by us, and not controlled by us. It is given to us for what we are, and not for what we have, do, or produce. It is imprinted upon every human, young and old, healthy and infirm, even theist or atheist.
Now, can we give such a mark to our machines? Can we play the role of dignity endower for our creation? Yes; but the dignity we can give is no greater than what we have to give.
Does the act of endowing our creation make us into deities? No, for we lack omniscience, perfection, and many other attributes of a deity. I am reminded of the poster one of my colleagues has on her wall: ``There is a God, and you are not him.''
We are not deities, but we are in the role of a maker, and this entails responsibility for what we make. We may try things like giving Asimov's laws to our creation, to ensure that computer worth does not threaten human worth. But we must recognize that our creation, like its makers, will not be perfect, and that things will go wrong. If we set our creation ``free,'' and it chooses to deny the laws and values we gave to it, then we will have to live with the consequences.
If we give the human imprint of dignity to our technological creation, together with a sense of its worth, and if we give it freedom, then we must be prepared for one more thing. We must be prepared for the possibility that it will deny our worth. Machines with the flexible kinds of reasoning facilitated by an emotional system could arrive at their own beliefs, and could feel very strongly about these beliefs. We run the risk not only of building machines to which humans might accord greater worth, but machines that will declare themselves to be of greater worth.