I see the root meaning of conversation as being "to keep company with." A conversational exchange is a "disturbance" of equilibrium between two (or more) parties, and its resolution.
That disturbance can be occasioned by a question, by one party coming into the presence of the other, by proximity, by some kind of closure, as in eye contact. Integral to the initiation and maintenance of a conversation is the disturbance of interpersonal and intrapersonal equilibrium, and its restoration.
A simple example: You are alone at a bus stop. Another person approaches; you see them, they see you (the prior equilibrium is disturbed--why is the person approaching? Is it a mugger? Or just someone like you, intending to wait for a bus?). There is eye contact as the other person arrives at the bus stop and stands there (the disequilibrium, or suspense as to what the approaching person is going to do, is resolved when it becomes apparent that they have approached not to attack you but simply to wait for the bus). You say something like "Pretty nice day. Do you know if the 76 bus is usually on time?" (The stasis or equilibrium is again disturbed, this time by your asking a question.) The other person replies, "Yes, the service here is usually pretty punctual." (The equilibrium is restored by their answering your question.)
The primary issue in human/machine conversation at the interface is not how people function (human psychology), but what the machine must be able to have and do to maintain a dialogue relationship (cognitive modeling) with its human user. Much of this is necessarily rooted in conversational conventions - the stereotypic things we do when speaking to others, including such straightforward and seemingly mundane things as facing the person with whom we are talking, breaking eye contact when we start to speak (an unconscious but otherwise clear signal that we wish to speak), looking toward the things about which we are speaking, and bringing whatever we wish to show off to someone forward and closer to them for a better look at it.
Our approach, then, is conversational interchange based upon the cognitive modeling of the requisite interpersonal skills and dialogue conventions - the modeling of practical conversational skill, independent of subject matter. Let us turn to a consideration of the kinds of activities required to pursue this goal.
Conversation: a mechanism whereby two parties are in dynamic, or static, equilibrium (the distinction between static and dynamic may be important here...)
One party disturbs the equilibrium of the other, who then acts to restore the balance, e.g., even in a "simple" exchange: (The total pattern of the exchange, though overtly simple, may be quite complex...)
A: Hello! (Says this out of tension/obligation, etc., to acknowledge the presence of the other - social convention, fear of embarrassment, whatever...)
B: Hello. (Resolves the tension by saying "Hello" back to A - restoring equilibrium, completing the "pattern," as in the notion of a "conversational waltz." Cf. Jeremy Campbell, Winston Churchill's Afternoon Nap: A Wide-Awake Inquiry into the Human Nature of Time. New York: Simon and Schuster, 1986.)
In the case of imperative (command) phrases, e.g., in the program "Put That There" (PTT):
Human: "Put that (pointing)...there."
Machine: (Is so constructed as to be "reactive" to such input, i.e., is set in motion by it to perform an act which, ipso facto, will restore equilibrium...)
Case in point: When people are somehow thrust together, say in a small room indefinitely, there is a mutual disequilibrium set off by mutual presence, proximity, and the unavoidability of eye contact. (NB: this is an interesting point re self-disclosing systems: generally we tend to avoid eye contact as somehow "impertinent"; it is a way familiars or equals address one another.) The disequilibrium is resolved by exchanging greetings; the parties restore each other's equilibrium.
Even when you log in on a computer, the computer, initially, is just sitting there - until you initiate an action, e.g., logging in. You disturb the computer's equilibrium by giving it commands; it restores equilibrium by carrying out the actions with you, and the two of you merge/converge to some mutually satisfying resolution.
Notice that there are conventions in conversation: people ordinarily want to reach mutual agreement, i.e., they implicitly signal that they want to reach agreement, that they accept the conditions of the conversation. (Cf. Martinich, A. P. Communication and Reference. New York: Walter de Gruyter, 1984; also Reeves, Byron & Clifford Nass. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press/CSLI Publications, 1996.)
There are useful conventions, stereotypes: e.g., a young man and young woman at tables in an ice cream parlor use conventional signals--necessarily so, unless they want to risk being thought "odd"--for "flirting," mutual signals of willingness to meet each other. It's not unlike the conventions of a dance (cf. "The Conversational Waltz" in Jeremy Campbell, op. cit.).
Thus, the conversation is a tool to meet mutual needs. In the human/computer case, one engages the computer to gain one's ends. During the conversation the computer is achieving its interim sub-goals as well; indeed, it must achieve them for the human to achieve his/her ends.
Any portion of a conversational exchange need not be verbal; it can be via gesture. The important thing is the setting up of a disequilibrium which impels the other party (who is disposed to be responsive) to resolve it.
The other party is, as it were, poised and primed, willing to take on a complementary role - to engage in the conversation - and by this they adopt the mutual dance. All this works, provided both parties are willing to establish and maintain contact.
Otherwise, it degenerates into one party "bothering" the other; failure to establish or maintain contact destroys the possibility of exchange.
In issuing multi-modal commands and in user/process exchange the dialogue is characterized largely by the use of the imperative mode: "Close that file...", "Tell me about that," "What's the cost of those..." -- looking and/or pointing at the items while uttering the words.
People communicate face-to-face chiefly by speech, gesture, and looks. We utter words, point to this or that, make shapes in the air with our hands, glance here and there. We not only swap information in these ways but have something called a "conversation."
Could we possibly deal in these ways with computers?
Yes, we sit in front of computers, but interaction with them is scarcely "face-to-face." Typing on a keyboard is like writing a note. And using a "mouse" is more like using a screwdriver than gesturing.
Let's not get rid of these devices. They are too useful. The keyboard and mouse are by far the main means whereby people now work with computers. But dealing with computers currently is nothing like conversing with another person, occasional claims that some interface is "conversational" notwithstanding.
Nickerson (1986) noted that no existing system supports anything substantively "conversational." (This observation is still largely true.) He noted that while some features of conversational interchange, e.g., a "two-directional, mixed initiative capability," may be desirable for many purposes, one might want other features to be less "symmetrical" than generally holds true in most inter-human conversations. (See: Nickerson, Raymond S., Using Computers: The Human Factors of Information Systems. Cambridge, Massachusetts: MIT Press, 1986.)
For instance, the computer must respond instantaneously while the human can take their time. The computer must tolerate imprecision on the human's part, while the computer itself must be precise. The computer must be able to do "super-human" things, like search rapidly through large databases, produce graphs instantly, and so on. Thus Nickerson suggested that a conversational model of human/computer interaction may be inappropriate, even undesirable, and that a preferred model is "...that of a person making use of a sophisticated tool." In this connection, Nickerson prefers the term "user" to "partner," as "user" implies the real asymmetry of goals and objectives that underlies people's use of computers:
User is not a term that one would normally apply to a participant in a person-to-person conversation, but it seems like precisely the right term for the case in which one party to an interaction is a machine. (Nickerson, p. 149, italics in original)
But what might a "conversational" interface be like? And, would an interface like that be a good thing?
Our common understanding of what a conversation is involves several features. First, it's nontrivial. It's more than a simple exchange of pleasantries like "Hello," "How's things?" "Not bad - Yourself?" And while information certainly can be exchanged in a conversation, we sense that it involves more than the simple information swap that goes on when, say, we deal with a furniture salesman about prices, delivery, and our credit card number.
The basic difference, we sense, lies in an exchange of ideas and opinions. With the furniture clerk, when we go beyond price information and start swapping reactions and feelings about French Provincial versus Danish Modern, then it's at least a "chat." If we begin to talk with the travel agent about great places for bed-and-breakfast in Wales, and not merely fares from here to there, then we may pass over from a simple exchange of data - where I want to go, how much it costs, when the train leaves - to whether I really would like that tour, and how the agent loved that route when she tried it last year. The rule of thumb, then: the more we get into sharing ideas and opinions, the more it is a conversation rather than mere information exchange.
Now, we don't always want to get into a deep conversation with store clerks and travel agents. We may want to take the information and go, and the clerk or agent feels the same way. That is, we "use" the other person, and they may indeed "use" us. We don't mistake them for a machine, but we don't "get personal" either. Further, getting into a conversation with someone implies a kind of bilateral willingness to enter an exchange on a more-or-less equal footing.
But why converse? And why must gesture and looking be in there along with speech? Isn't speech input/output enough?
Nickerson elaborated some basic features and capabilities like "command confirmation," "undo command," "your turn" signals, and the like, aimed at making the system seem more "friendly" to the user - the counterpart of basic courtesy phrases that smooth our talking with clerks and furniture dealers. He refers to a fundamental asymmetry of goals and objectives between the partners in the dialog. Fair enough. I don't always want to get into a long and soulful chat with the store clerk or the bank teller either. But a conversation is radically different from either a batch of instructions you give to the service station attendant or how's-the-weather pleasantries.
A conversation is not a simple passing of information back and forth. It has a different goal: to exchange thoughts and opinions in order to gain a deeper understanding of some topic.
In a conversation, we do more than give signals to one another. We exchange thoughts and opinions. The conversation, as opposed to a simple transfer of information - which most people/computer interchange is - includes clarification, exploration, and elaboration of things and ideas, with the goal of getting a deeper understanding of the topic.
Is it possible to have with a computer the kind of intimacy and rapport that that sort of communication seems to imply? Again, a root meaning of "to converse" is "to keep company with." Conversing is more than just talking. It means a sharing of goals, a partnership, conviviality. These are not the terms we usually associate with the use of computers. If you are like most of us, your experience in dealing with computers has been far from convivial.
Can conversational communication work between humans and computers as well? Is that important? Is it a reasonable goal? Can it be done? Why would it be valuable? How might it become a way of interacting with computers?
It's important because it will open doors to computer power without our having to make big changes in the way we ordinarily operate. Many of us now comfortably use word-processing programs and simple spreadsheets. But most of us are not specialists with the time and dedication to master such passports to computational power as the "C" or LISP programming languages or operating systems like UNIX or DOS. So-called "expert systems," currently applied to medicine, geology, and configuring MIS systems, promise to become ever more powerful and useful. Yet that power may stay untapped if people find they can't talk to such systems in readily accessible, manageable ways. This means not just "natural language" in the sense of English-like phrases typed on keyboards, but natural dialogue: gestures, looks, and speaking.
Is it a reasonable goal? Well, work has been done suggesting that it is. Computers already have been made to combine speech input with pointing by hand to reference items on a graphics display, making moving some item from one spot to another as simple as saying "Put that...there." Further work by my students and me has shown how a computer might show off and tell about its database according to how the user shows interest in that database--as revealed by patterns of looking. Such breakthroughs, while modest in themselves, imply ways to make computers even more responsive to our words, gestures, and glances.
Thus, "conversation" goes beyond data transmission, information transmission. The conversation is an instrument that we as people use amongst ourselves to gain greater understanding. It is a mode of relating.
The conversation can be variously construed as supporting convergence on a deeper and more thorough understanding of some topic.
On this view, the conversation is an instrument through which a person, with someone else, explores opinions and ideas concerning some topic in order to gain a deeper understanding of that topic.
Examples might be a parent talking with a son or daughter about how they feel about applying to college, where they'd like to go, what courses to take. Or two businessmen talking about whether it makes sense for their companies to merge. In such conversation, both sides learn more about the facts and figures pertinent to the merger. They also learn a lot about each other: what they are or are not interested in, what they react positively and negatively to. If the conversants were a young couple deciding whether or not to become engaged, the conversation has some of the same flavor as that of the two businessmen, but with perhaps more emphasis upon gaining knowledge and impressions about the other person than upon facts about where they might live or how they could manage their jobs.
Thus the conversation as an interactive paradigm has a decidedly deeper function than what we may be used to thinking of. It is two-way, to be sure. It is not merely one side talking and the other side listening - though that might occur for a stretch - but a mutual give-and-take. It is "mixed-initiative" - that is, both sides start topics, end topics, and ask questions, each side sharing in the "control" of the conversation. As in a chess game or tennis match, each side takes its turn, one after the other - but on top of that is the question of who has the attack and who is defending. That is, the locus of initiative passes back and forth, a second-order accompaniment on top of, if you will, the steady exchange of moves.
In the realm of human/computer exchange, unlike the user arrangement that Nickerson describes, the human may not necessarily expect or demand that the computer reply instantly. Instantaneous response as such is largely beside the point. The computer is engaged in sampling your behavior, deciding how best to reply to what you seem to be doing. What you are doing is often, mostly, looking about, probing, mulling, weighing. This takes place at a slower pace than simply issuing a barrage of fast questions that you expect and demand the computer instantaneously reply to.
The expectation that the computer be precise while tolerant of imprecision on our part is to some degree suspended. The computer may often have to venture a guess, though an informed one, as to what--in the things being dealt with--you are likely to be most interested in, or what is likely to be relevant. It takes stabs. It initiates this or that sally, especially during "lulls" in your activity. It gathers cues and clues from your looking, pointing, and mumbling, weighing them, searching to formulate its next response.
Suppose it has presented you, up close, with a picture of the Boston Common. Or the Alexander III bridge in Paris. If the machine observes you looking mostly at the Eiffel Tower in the picture's background, and your looking generally is de-correlated from the specific graphic features, it may reasonably infer that you are more interested in Paris than in the specific picture as such. The picture is, for you at this time, an occasion to reminisce about Paris, not to appreciate the artist's technique.
Or, suppose your looking were active, correlated with the main lines and strokes of the drawing. A reasonable inference is that you are indeed interested in the drawing qua drawing, not as an associated "pointer" to something beyond itself. Then the machine might chat a bit about the drawing, its perspective and lines.
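One crude way such gaze/feature correlation might be computed is sketched below. This is purely a hypothetical illustration: the fixation coordinates, feature points, function names, and distance threshold are all my own assumptions, not any actual system's method.

```python
# Illustrative sketch: infer whether the user is interested in a picture
# as a picture, or only in what it evokes, by checking how closely their
# gaze fixations cluster on the drawing's principal features.
import math

def mean_feature_distance(fixations, features):
    """Average distance from each gaze fixation to its nearest drawn
    feature point -- small means the looking tracks the drawing."""
    total = 0.0
    for fx, fy in fixations:
        total += min(math.hypot(fx - px, fy - py) for px, py in features)
    return total / len(fixations)

def infer_interest(fixations, features, threshold=20.0):
    """Guess: the drawing itself vs. the associated topic."""
    if mean_feature_distance(fixations, features) < threshold:
        return "drawing"   # looking correlated with the strokes
    return "topic"         # looking de-correlated: e.g., reminiscing

features = [(0, 0), (100, 0), (100, 100)]       # main strokes (assumed)
on_strokes = [(2, 1), (98, 3), (99, 97)]        # tracks the lines
wandering = [(50, 300), (400, 50), (250, 250)]  # drifts elsewhere
assert infer_interest(on_strokes, features) == "drawing"
assert infer_interest(wandering, features) == "topic"
```

In the first case the machine might chat about the drawing's lines; in the second, about Paris.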
Suppose you asked the machine to tell you about 16th Century Spanish Architecture. Suppose that the program has its own agenda to follow, in the same sense that we have a "script" more or less in mind when we show someone about our living room. The program sets off on this agenda in a purposeful way; indeed, you have given it license to do so, in the same sense that you hire a guide to show you about a city. Thus, you set the high-level initiative for the system.
Suppose, however, you begin not to look where the system, on its initiative, requests you to pay attention; remember, with an eye-tracking capability, it can sense where you are looking, and thus has feedback on how closely you are following its narrative. At an even higher level of the system, there could be a variable reflecting system assertiveness: is the system going to "relax" while carrying out its assigned chore of telling you about 16th Century Spanish Architecture, or will it be more "proactive"?
Also, while at one level the narrating and showing program may push on relatively assertively from topic to topic, it may, within a topic, go rather slowly, having picked up--from the "intense" pattern of your looking--that you are either quite absorbed or perhaps a bit slow on the uptake.
Thus there could well be a nesting of simple variables which reflect varying levels of initiative, assertiveness, and persistence, with specific routines periodically adjusting their levels as a consequence of how the "pace" of the ongoing exchange seems to be going. The human user could well have a sense that the system is "accommodating" or has "got a mind of its own," or some quality in between--all as the result of the program setting and re-setting within itself a small set of variable "set points" which it ongoingly interrogates to see how it ought to maintain and modulate its own responsiveness and reactivity with respect to the responsiveness it senses from the human user.
As an instance, suppose the system slowed down in response to sensing that the overall pattern of the human observer was gradually becoming less dynamic. The total exchange would thus become gradually more sluggish. On the other hand, suppose the machine instead picked up the pace, like an exercise leader prodding a flagging class. These alternatives represent real qualities of response from the machine. The experience of the human user is going to be quite different with either behavior trend on the part of the machine, perhaps sensing the machine as "stubborn" or "flexible." Perhaps two, probably no more than three, nested personality variables might serve to lend the machine a "personality" that is both flexible and adaptable to the interchange.
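Such nested set-point variables might be sketched as follows. Every name here, and the simple drift-toward-target update rule, is my own illustrative assumption, not a specification; the point is only that a tiny set of periodically re-read variables can yield either a "mirroring" or a "prodding" machine.

```python
# Hypothetical sketch of nested "personality" set points. The machine
# periodically re-reads these variables to modulate its own pace in
# response to the sensed dynamism of the user.

class Disposition:
    def __init__(self, assertiveness=0.5, persistence=0.5):
        self.assertiveness = assertiveness  # push on vs. relax
        self.persistence = persistence      # stay on a topic vs. move on
                                            # (not exercised in this sketch)

    def update(self, user_dynamism, mirror=True):
        """Drift the set point toward (mirror=True) or against
        (mirror=False, the 'exercise leader') the user's sensed pace."""
        target = user_dynamism if mirror else 1.0 - user_dynamism
        # Move a fraction of the way toward the target each cycle,
        # so the machine's manner drifts rather than lurches.
        self.assertiveness += 0.25 * (target - self.assertiveness)

    def pace(self):
        if self.assertiveness > 0.6:
            return "brisk"
        if self.assertiveness > 0.3:
            return "measured"
        return "languid"

d = Disposition()
for _ in range(10):                # the user grows steadily less dynamic...
    d.update(user_dynamism=0.1, mirror=True)
print(d.pace())                    # -> "languid": the mirroring machine slows too
```

Flipping `mirror` to `False` gives the opposite trend: the machine picks up the pace as the user flags.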
Are there limits to the range of possible and plausible topics "discussible" between a machine and a human? The late Sir Kenneth Clark, commenting on his TV series "Civilization," said that he had to omit any significant coverage of law and philosophy--surely the stuff of civilization--because they were not readily "picturable." Even so, it is to a great extent possible to "manage" a dialogue concerning abstract topics by the strategy of assigning ideas to definite spots in a shared space. Consider an economics professor lecturing his class: "The current administration favors a free trade stance on the one hand (gesturing to the left), but has on the other hand (gesturing toward the right) suggested graduated tariffs on selected goods. But can that (gesturing to the left) work in a world trade situation where...".
A speaker will often tie even abstract ideas to concrete locations: "On the one hand (indicating some spot in space) there is the notion of... and on the other hand (gesturing to another spot in space) there is...". "Now, I tend to agree with that (glancing at one of the spots)..."
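This "parking" of ideas at spots in a shared space could be sketched as a simple nearest-anchor lookup. The sketch below is a hypothetical illustration; the anchor coordinates, idea labels, and function name are assumptions of mine, not any actual system's design.

```python
# Hypothetical sketch of spatial deixis: abstract ideas get "parked"
# at spots in space as the speaker gestures, so that a later glance
# or point toward a spot can retrieve the idea it stands for.

def nearest_anchor(anchors, gaze_point):
    """Resolve a glance to the idea parked nearest to it."""
    def dist2(spot):
        (x, y), _idea = spot
        return (x - gaze_point[0]) ** 2 + (y - gaze_point[1]) ** 2
    return min(anchors, key=dist2)[1]

# The speaker assigns ideas to spots while gesturing:
anchors = [((-1.0, 0.0), "free trade stance"),   # "on the one hand"
           (( 1.0, 0.0), "graduated tariffs")]   # "on the other hand"

# Later: "can THAT work..." while glancing left.
assert nearest_anchor(anchors, (-0.8, 0.1)) == "free trade stance"
```

A listener, human or machine, resolves "that" by the same nearest-spot reasoning.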
Overall, though, we have to ask ourselves what topics it makes sense to discuss with a machine. Sales figures, architecture, even history (with stress on events rather than upon interpretation), yes; but values, morals, beliefs, probably not.
The eyes can work in plural reference, both in terms of the speaker (sender) originating a message to others, and in terms of the receiver of the message being able readily to interpret the intentions of the sender.
Consider an army Sergeant confronting a group of recruits. "OK," he says, "you guys fall out over here for clean-up detail." As he speaks, his eyes pick out a sub-group of the men before him--say eight out of the total of seventeen. The Sergeant doesn't look each individual in the eye in order, one by one, to pick out the ones he intends.
Rather, his eyes fix one or two men at the bounds of the "grouping" he intends, and then his glances dart a few times amidst the group of eight. The men listening to him are able as well to see his--the Sergeant's--eyes, and comprehend in a complementary sense exactly which set of individuals he intends.
The acts of the Sergeant are informed by a natural, spontaneous "culling out" of the group he intends by his eye, first fixing the bound or edge of the sub-group he intends to designate. There may be some "natural boundary" in the group that both prompts and assists him in setting the boundary as he did: perhaps the sub-group of eight are somehow "clumped" together within the context of the seventeen; they may be milling more or less together within the larger group; they may all be in tee-shirts, while all the others are wearing fatigue shirts. Or they may have, as a group, just re-joined the larger group--and thus retain a present sense of being a "subgroup" within the larger group, and be so perceived by the other nine men.
Thus, in a way complementary to the Sergeant's perception, the recruits--both the ones designated and the ones not--have a sense of "belonging together" on some basis, or, if part of the other sub-group, readily see themselves as individuals not belonging to the group the Sergeant wishes to designate.
The key point is that the Sergeant on some basis "sees" a certain sub-group of eight men out of the seventeen as somehow "belonging together," and the men--both those in the sub-group and those outside it--also "see" the eight as belonging together. The Sergeant generates his actions in the light of a mindset that sees certain of the men as belonging together, and the men, both those to be selected and those not, in a complementary way "see" who is to be selected and who is not.
The prospects for realization will involve insights from the growing body of work in many disciplines, such as computer-supported cooperative work (CSCW) to help orchestrate cooperative focus on task.
Relevant, too, would be insights from co-ordination science, to re-analyze tasks in light of the possibility of interactive, real-time multi-modal communication with agents or groups of agents.
Also of anticipated worth would be insights from artificial intelligence work in such areas as "interface agents."
User acceptance would, I believe, depend heavily upon unobtrusiveness in eye tracking and gesture tracking. Apparatus for eye and gesture tracking has, up to now, been for the most part clumsy and obtrusive. However, the progressive miniaturization of electronics, as well as of electromechanical devices, bodes well for the eventual emergence of eye- and gesture-sensing apparatus which will be both robust and acceptable to the general user.
The bottom line will be, for the user: Is it fun to use? Do I like using it?
What's appropriate to talk about with computers? Possible to talk about? Plausible to talk about?
One contrast that has been noted between inter-human and human/computer conversations is that between humans, no one expects the other to know literally everything about some topic, but the human "user" expects the computer to be omniscient, and is quite irritated when it proves otherwise. This attitude toward the machine may well be inherent in the user/tool relationship between the human and the machine that some researchers postulate, and ideally would lessen as a more conversational relationship developed.
In the redefinition of the human/computer relationship that a conversational context would imply, the machine may occasionally NOT know all about some topic, but the human user would tolerate the machine's admitting so and saying it would consult some "information network" in an attempt to find out, just as we would tolerate a human acquaintance consulting their notes or a reference book.
One can imagine three levels of interaction, characterized by increasing sophistication:
This initial level of dialogue is characterized largely by the use of the imperative mode. It corresponds most closely to the view of the human/computer relationship as one-sided and asymmetric, with the role of the machine not as "partner" but as "tool." Examples of exchanges at this level are: "Close that file...", "Open that folder..." - looking or perhaps pointing at the items while uttering the words.
Practical applications at this level might be, e.g., the stock-trader workstation, where the trader has several screens set about him/her, any one of which is "eye-addressed" when talked to. The speech/gesture/looking commands might include such as "What is IBM at?", "Sell Texaco," and the like.
We may note that the logical combination of inputs from all three modes can help with both attention-getting and ensuring positive control over crucial actions. The machine does an AND on all three modes when positive actions are at issue: for example, a sell or buy order to be executed requires that the human be looking AND pointing AND saying "sell" before the machine will act.
This is just about what takes place when one human communicates across a noisy room to deliver just such a message to another human; the human on the receiving end will ordinarily demand the convergence of the three modes, unconsciously, as a matter of course. When the simple getting of attention or alerting is all that is at issue, or the consequences of an action are not fatal or irrevocable, then the inclusive OR of inputs will do--i.e., input on any channel is sufficient.
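The AND/OR fusion rule just described can be sketched as a small dispatch function. This is a hedged illustration: the function name, mode flags, and the notion of a "critical" flag are my own assumptions, not any particular system's interface.

```python
# Sketch of multimodal input fusion: critical actions demand agreement
# (logical AND) across speech, gesture, and gaze; mere alerts accept
# any single mode (logical OR).

def fuse_modes(speech_ok, gesture_ok, gaze_ok, critical):
    """Decide whether to act on a multimodal command."""
    if critical:
        # e.g., "Sell Texaco": require the user to be looking AND
        # pointing AND saying the command before executing.
        return speech_ok and gesture_ok and gaze_ok
    # Attention-getting or easily reversed actions: any channel suffices.
    return speech_ok or gesture_ok or gaze_ok

# A sell order with only speech detected is refused...
assert fuse_modes(True, False, False, critical=True) is False
# ...but all three modes together authorize it.
assert fuse_modes(True, True, True, critical=True) is True
# A simple alert gets through on gaze alone.
assert fuse_modes(False, False, True, critical=False) is True
```

The design mirrors the noisy-room case: the receiver unconsciously demands convergence of modes before acting on anything consequential.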
At this level of interchange there is more to-and-fro - a more mixed-initiative situation holds - but the sense of the status relationship between human and machine is, as with simple commands, still that of user/tool.
For example, in an electronic publishing context, consider the user formatting or laying out a page. There would be more interaction with the machine, as the machine would have in itself more "intelligence" about spatial layout of the page. Another example would be that of interacting with an expert system. The dialogue exchange between the user and machine in the typical expert system situation is that of a user (human) interacting with a process (the expert system shell, or the system itself).
This last level represents the greatest research challenge: to achieve full conversational interchange between human and machine. This is the conversation as an interactive medium, with its own internal coherence and consistency, with both conversants tuned into a common theme. This inner coherence is ordinarily taken for granted; it is only noticed when things break down, as in a comic Abbott & Costello "Who's on first?" type of interchange, or when conversing with a schizophrenic, or with a group who speak a foreign language.
There is also the issue of whether we conceive of ourselves talking with or to the machine. It matters whether we think of ourselves as talking with the machine as an entity, or with the machine simply as a neutral conduit for the ideas of others (humans) or as simply a presentation medium.
In this latter regard, the machine as conversant can stand as an intercessor or surrogate for a person, or rather for a "point of view." Consider a lawyer arguing the points of a case with a computer; the aim is to exercise and critique the real person's argument, with the computer acting to marshal and articulate the logical steps of the opposing view. The computer is not expressing its "personal" opinion.
This is indeed the stance taken toward the "expert system," wherein, for example, the expertise of some medical specialist has been extracted and set up as a knowledge base. When someone now interacts with that expert system, are they interacting with the doctor or a machine?
Probably, the sense of it is that one is interacting with the stored expertise of the doctor as implemented on the machine. But in any case, the sense of it is surely not one of consulting the machine in the sense that it is the microchips and circuitry that hold the medical content and wisdom. Only the most technologically romantic would begin to conceive of such an interchange as holding a dialogue with a machine on the other end.
What is the goal in this? Not a Turing test, not an issue of equating humans and computers, nor of imputing "consciousness" to machines; rather, the aim is, through machine sensors and intelligence, to accommodate human conversational interchange at the interface.
What capabilities can be built into machines? Learning from experience? Can a machine possess "commonsense"? What is the relation between "mind" and "experience"? Do computers need bodies?
Is there some "metaphoric capability"--understanding metaphor, generating metaphor--that only people possess, which liberates them from an unrelentingly literal understanding of language? Will a machine be able to make sense of such expressions as "He was beside himself," "I'm on top of the situation," "She's burning with ambition" as easily as people do?
Can the computer understand the content of the discussion, that is, "know" what is going on inside the "box," as it were? Perhaps. Perhaps it can know it in a "semantic network sense," that is, words about words about yet more words. In concrete reference, the terminus of a chain of reference is some thing, process, or act. In a semantic network, the terminus of a chain of reference is a symbol. The meaning of a symbol to a machine is yet more symbols; the meaning of a symbol to a human is a meaning.
Are computers "people"? Are there, or will there be, "computer rights," as in "animal rights" and "human rights"?
Are emotions the hallmark of humans? What about computational theories of the emotions?
Is exhibiting "personality" the same as being a person?
Is it more plausible to say that a computer can have opinions ("That music is close to Bach...") than to say it can have preferences ("I like Bach better...")--the latter implying an inner cognitive life?
What about winning at chess vs. enjoying chess? People can do both; can a machine do only the former?
(In the realm of computer chess, we see ever more articles celebrating how this or that rated master is defeated by such and such a computer chess player. Indeed, the programs have become more and more efficient at defeating humans. But did they enjoy the game? That to me seems at least as critical as winning. Perhaps the human creators of the chess program enjoy the fact of their brainchild's success, but did the machine enjoy the kick of insights, the tension of gambits and gambles, the suspense of bluffs?)
You shoot a computer: is it property damage or murder?
Is "mind" a puzzle that may eventually be exhaustively solved, or an ultimate mystery?
I do not share the conviction of some in computer science that computers and people are on the same existential plane. I believe that even with the most developed machines, if we were to shoot one with an AK-47, the charge would be property damage, not murder.
The Turing test is not sensitive in its criteria to the real differences between humans and machines. The former have consciousness (self-awareness), conscience, and an inner spiritual identity or "soul", and the latter do not.
People share consciousness and intelligence with animals, though in a different degree. The capacity for abstract, symbol-based thought may or may not be unique to humans; certain animals may possess such, though not computers, which--regardless of how complex--may push tokens about but lack real thought.
Humankind shares with matter and organic life the physical, chemical, and biological orders, but mind reaches an order beyond that: the spiritual order. That is the order of values, of moral choice--the moral sense of mutual responsibility for one another.
I certainly share the optimism that computers will advance in complexity and power. However, I do not share the materialist conviction that people are "computable." My own personal belief is that each and every human mind is, literally, a unique miracle and ultimate mystery.
Something superficially resembling "consciousness" may someday be programmed into machines, but--however complex and clever--it would nonetheless be like Svengali, as played by John Barrymore in the old 1931 movie Svengali (from du Maurier's Trilby), finally realizing that, while he could hypnotize his beautiful young protegee into singing beautifully, his attempt to hypnotize her into speaking of love for him "...is only Svengali talking to himself..."