N E G R O P O N T E

Message: 16
Date: 10.1.94
From: <nicholas@media.mit.edu>
To: <lr@wired.com>
Subject: Sensor Deprived

Proof of Presence
When it comes to sensing human presence, computers aren't even as talented as those modern urinals that flush when you walk away. You can lift your hands from a computer's keyboard (or even just pause between keystrokes) and your computer does not know whether the pause is for momentary reflection or for lunch. We give a great deal of attention to the human interface today, but almost solely from the perspective of making it easier for people to use computers. It may be time to reverse this thinking and ask how to make it easier for computers to deal with people.

A recent Media Lab breakthrough by Professor Neil Gershenfeld solves a range of user-interface problems with a few dollars of hardware. A varying electric field induces a small (nanoamp) current in a person, and measuring that current locates the person within the field. This makes it possible to build smart appliances and furniture that remotely and unobtrusively locate fingers or hands in 2-D or 3-D, bodies in chairs, or people in rooms.
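
For the curious, here is a minimal sketch of the idea in Python, assuming a toy model in which the received signal falls off with the square of the hand-to-electrode distance. The real field-sensing physics is considerably richer, and the electrode geometry, gain, and noise level below are invented purely for illustration.

```python
import numpy as np

# Hypothetical geometry: four receive electrodes at the corners of a
# 30 cm square surface. The real sensor layout is an assumption here.
ELECTRODES = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])

def predicted_signal(hand_xy, gain=1.0):
    """Toy model: received signal falls off with the square of the
    hand-to-electrode distance. A stand-in for the real physics."""
    d = np.linalg.norm(ELECTRODES - hand_xy, axis=1)
    return gain / (d ** 2 + 1.0)   # +1 avoids a singularity at contact

def locate_hand(measured, step=1.0):
    """Grid-search the surface for the position whose predicted signals
    best match the measured currents (least-squares error)."""
    best_xy, best_err = None, float("inf")
    for x in np.arange(0.0, 30.0 + step, step):
        for y in np.arange(0.0, 30.0 + step, step):
            err = np.sum((predicted_signal(np.array([x, y])) - measured) ** 2)
            if err < best_err:
                best_xy, best_err = (x, y), err
    return best_xy

# Simulate a hand at (10, 20) with a little measurement noise.
rng = np.random.default_rng(0)
readings = predicted_signal(np.array([10.0, 20.0])) + rng.normal(0, 1e-5, 4)
print(locate_hand(readings))       # close to (10.0, 20.0)
```

The point of the sketch is only that a handful of analog readings, plus a cheap inverse calculation, are enough to recover position: no camera, no contact, a few dollars of hardware.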

Another way for computers to sense human presence is through computer vision: giving machines the ability to see. Companies like Intel are now manufacturing low-cost hardware that will eventually lead to an embedded video camera above the screen of almost every desktop and laptop computer. Such cameras will let people telecommute and collaborate visually from a distance. The computer could use that same camera to look at its user.

Furthermore, machine vision could be applied to sensing and recognizing smiles, frowns, and the direction of a person's gaze, so that computers might be more sensitive to facial expression. Your face is, in effect, your display device; it makes no sense for the computer to remain blind to it. I am constantly reminded of the tight coupling between spoken language and facial expression. When we talk on the telephone, our facial expressions are not turned off just because the person at the other end cannot see them. In fact, we sometimes contort our faces even more to give greater emphasis and prosody to spoken language. By sensing facial expressions, the computer could access a redundant, concurrent signal that enriches the spoken or written message.
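
As a sketch of what "looking back" might involve, the fragment below uses the stock Haar-cascade detectors that ship with the OpenCV library: a detected face stands in for "the user is present," and a detected smile for a first crude reading of expression. The camera index, frame count, and detector parameters are illustrative choices, not a prescription.

```python
import time
import cv2

# Stock Haar cascades bundled with OpenCV. A detected face is a crude
# stand-in for "the user is present"; a detected smile, for expression.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)          # the camera above the screen
for _ in range(50):                # sample ~10 seconds, then stop
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3,
                                          minNeighbors=5)
    if len(faces) == 0:
        print("No one there: reflection, or lunch?")
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]   # search for a smile inside the face
        smiles = smile_cascade.detectMultiScale(roi, scaleFactor=1.7,
                                                minNeighbors=20)
        print("User present,", "smiling" if len(smiles) else "not smiling")
    time.sleep(0.2)
cap.release()
```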

Of Mice and Men
A mouse is one of the most absurd input devices. "Mousing around" requires four steps: 1) moving your hand to find the mouse, 2) moving the mouse to find the cursor, 3) moving the cursor to where you want it, and 4) clicking or double-clicking the button. Apple's innovative design of the new PowerBooks at least reduces these steps to three and puts the "dead mouse" where your thumbs already are, so that typing interruptions are minimized.

Where mice and trackballs really fall apart is in drawing. I defy you to sign your name with a trackball. This is where tablet technology, which has been moving more slowly down the cost-reduction curve, plays an important role. Nonetheless, few computers have a data tablet of any sort, and those that do face the problem of where to put the tablet and the keyboard, both of which compete for the central position near the display. The clash is usually resolved by putting the keyboard below the display, because so few people touch-type.

High-Touch Computing
The dark horse in graphical input is the human finger. This is quite startling, considering the human finger is a device you don't have to pick up. You can move gracefully from typing (if typing has grace) to pointing, from horizontal plane to vertical. Why hasn't this caught on? Some of the limp excuses follow:

- You occlude that which is beneath your finger when you point at it. True, but that happens with paper and pencil as well, and it has not stopped the practice of handwriting or of using a finger to point something out on hard copy.

- Your finger is low resolution. False. It may be stubby, but it has extraordinary resolution when the ball of the fingertip touches a surface. Ever-so-slight movements of your finger can position a cursor with extreme accuracy.

- Your finger dirties the screen. But it also cleans the screen. One way to think about touch-sensitive displays is that they will be in a kinetic state of more or less invisible filth, where clean hands clean and clammy ones dirty.

The real reason for not using fingers is, in my opinion, quite different. With just two states, touching or not touching, many applications are awkward at best. But if a cursor appeared when your finger came within, say, a quarter of an inch of the display, then touching the screen could carry the multiple states of a mouse click or a data tablet. With such "near-field" finger touch, I promise you, we would see many touch-sensitive displays.
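
Here is a sketch of how a driver might turn that quarter-inch near field into states and events. The threshold, the event names, and the simulated height readings are all invented for illustration.

```python
from enum import Enum

class FingerState(Enum):
    AWAY = "away"          # finger out of range: no cursor
    NEAR = "near"          # within ~1/4 inch: cursor tracks the finger
    TOUCHING = "touching"  # contact: acts like a button press

NEAR_FIELD_MM = 6.0  # roughly a quarter of an inch, per the column

def classify(height_mm):
    if height_mm <= 0.0:
        return FingerState.TOUCHING
    if height_mm <= NEAR_FIELD_MM:
        return FingerState.NEAR
    return FingerState.AWAY

def events(heights_mm):
    """Turn a stream of finger heights into hover/press/release events,
    much as a driver turns mouse motion into clicks."""
    state = FingerState.AWAY
    for h in heights_mm:
        new = classify(h)
        if new is not state:
            if new is FingerState.NEAR and state is FingerState.AWAY:
                yield "show cursor"
            elif new is FingerState.TOUCHING:
                yield "press"
            elif state is FingerState.TOUCHING:
                yield "release"
            elif new is FingerState.AWAY:
                yield "hide cursor"
            state = new

# A finger descending to the glass, pressing, and withdrawing:
print(list(events([20, 8, 4, 1, 0, 0, 3, 9, 25])))
# ['show cursor', 'press', 'release', 'hide cursor']
```

The hover state is what rescues the finger: the cursor appears before contact, so contact itself is free to mean something, exactly as a mouse button does.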

Eyes as Output
Eyes are classically studied as input devices. The study of eyes as output is virtually unknown. Yet, if you are standing 20 feet away from another person, you can tell if that person is looking right in your eyes or just over your shoulder - a difference of a tiny fraction of a degree. How? It surely isn't trigonometry, wherein you are computing the angle of the other person's pupil and then computing whether it is in line with your own gaze. No. That would require unthinkable measurement and computation. There is some kind of message passing, maybe a twinkle of the eye, which we just don't understand.

We constantly point with our eyes and would find such computer input valuable. Imagine reading a computer screen and being able to ask: What does "that" mean? Who is "she"? How did it get "there"? "That," "she," and "there" are defined by your gaze at the moment, not by some clumsy elaboration. Your question concerns the point of eye contact with the screen, and to reply the computer must know precisely where that point is. In fact, when computers can track the human eye at low cost, we are sure to see an entire vocabulary of eye gestures emerge. When that happens, human-computer interaction will be far less sensor deprived, far more like face-to-face communication, and far better for it.
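
One plausible mechanics for resolving "that," "she," and "there": every on-screen element registers the rectangle it occupies, and the computer hit-tests the tracked gaze point against those rectangles. The eye tracker itself is assumed here, and the elements and coordinates below are invented for illustration.

```python
# Each on-screen element registers its bounding box; resolving a deictic
# word is then a hit test of the gaze point against those boxes.

ELEMENTS = [
    # (label, kind, left, top, right, bottom) in screen pixels
    ("the word 'entropy'", "that", 120, 300, 210, 320),
    ("photo of Ada Lovelace", "she", 640, 80, 880, 340),
    ("map pin: Cambridge, MA", "there", 300, 500, 340, 540),
]

def resolve(pronoun, gaze_xy):
    """Return what the user's eyes were on when they said the pronoun."""
    gx, gy = gaze_xy
    for label, kind, left, top, right, bottom in ELEMENTS:
        if kind == pronoun and left <= gx <= right and top <= gy <= bottom:
            return label
    return None

print(resolve("she", (700, 200)))   # -> photo of Ada Lovelace
print(resolve("that", (150, 310)))  # -> the word 'entropy'
```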

Next Issue: Digital Etiquette


[Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.10 October 1994.]