Many projects at the Media Lab require embedded control and communication capabilities. The requisite level of these abilities varies among projects, from 1,000 to 1 billion operations per second, and with data channels from 100 to 100 million bits per second. Interesting systems range from sensors on the body which dribble low-rate data streams to wearable reality mediation and augmentation applications which handle real-time video streams. To date, these applications have been implemented on an ad hoc basis, using commercially available embeddable processors and wireless data links. The application-level protocols have also been developed on a per-project basis. The end result is a collection of thought-provoking demonstrations that illustrate the potential of an environment in which objects are digitally enabled, and a suggestion of what might occur if all of those objects were able to communicate.
This and other examples point out the need for reusable software (and hardware) components to enable the construction of Smart Spaces, Things That Think, and Wearable Computers that can easily interoperate to provide a platform for a new generation of computer applications. This is addressed in part by the Hive project, a TTT toolkit under development. So far the Hive project has concentrated mainly on software, to build a working abstraction of physical systems and their digital shadows.
Very generally, thinking things must be able to communicate with each other and their environment. To say that these devices must somehow share a common protocol for communication is to merely scratch the surface of the problem. Such a protocol must be flexible enough to convey structured data, ranging in complexity from sensor readouts to behavioral extensions. It must also take advantage of the strengths of the communication medium present at any node of the system at large.
As trite as it may seem, the truth is that wearable computers are in part a fashion statement. Their widespread acceptance is not likely to come about until they can disappear into clothing, for example.
Wash-and-wear multilayer electronic circuitry can be constructed on fabric substrates, using conductive textiles and suitably packaged components. Fabrics are perhaps the first composite materials engineered by humanity; their evolution led to the development of the Jacquard loom, which itself led to the development of the modern computer. The development of fabric circuitry is a compelling closure of the cycle that points to a new class of textiles which interact with their users and their environments, while retaining the properties that made them the first ubiquitous "smart material". Fabrics are in several respects superior to existing flexible substrates in terms of their durability, conformability, and breathability. The present work adopts a modular approach to circuit fabrication, from which follow circuit design techniques and component packages optimized for use in fabric-based circuitry, flexible all-fabric interconnects, and multilayer circuits. While maintaining close compatibility with existing components, tools, and techniques, the present work demonstrates all steps of a process to create multilayer printed circuits on fabric substrates using conductive textiles.
Why do I mention washable computing? It was my Master's Thesis work, but more importantly, it pointed out that components that inhabit clothing will have a small number of connections to other components, and that it makes sense to think of washable computers as groups of small processing elements connected by a medium-bandwidth network. More on this later. The next step is to go down several orders of magnitude in scale.
In a broad sense, we're proposing to create smart matter from the top down.
Nanotechnologists have correctly realized that intra- and inter-system communication is a crucial feature of any non-trivial nanomechanical design. For example, in Nanosystems (Drexler 1994) an acoustic communication protocol is proposed to allow non-local transmission of control information to drive nanoassemblers. Implicit in this is the subsidiary need for an efficient static representation of structures to be assembled, which in the most general sense may be considered as procedures to be executed by assemblers.
While construction and control on the nanoscale is one of the grand challenges of the present, a more feasible first step is to experiment with the control of individual small particles of matter. Many interesting applications stem from this basic ability, one of which is the proposed focus of my Ph.D. thesis: to build a macroscopic particle trap capable of suspending a particle of approximately the size and mass of a dust speck, and by observing the changes in the particle's orientation and position to infer the acceleration and rotations acting on the particle's inertial frame. Ideally, such a trap would be only an order of magnitude larger than the particle itself, yielding a six-degree-of-freedom inertial measurement unit occupying no more than a cubic millimeter of volume.
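As a rough sanity check on the sensing idea, here is a minimal sketch; the harmonic model and the numbers below are assumptions for illustration, not measured values. If the trap behaves like a harmonic well with secular frequency f, a constant external acceleration a shifts the particle's equilibrium position by dx = a / (2*pi*f)^2, so measuring dx gives a:

```python
# Assumed harmonic model: in a trap with secular frequency f, a constant
# external acceleration a displaces the particle's equilibrium by
# dx = a / (2*pi*f)^2, so the acceleration can be read back from dx.
import math

def accel_from_displacement(dx_m: float, f_hz: float) -> float:
    omega = 2 * math.pi * f_hz          # angular secular frequency
    return omega**2 * dx_m              # a = omega^2 * dx

# With an assumed 1 kHz secular frequency, a 25 nm shift implies ~1 m/s^2:
print(accel_from_displacement(25e-9, 1000.0))
```

This also hints at the design tension: a stiffer trap (higher f) confines the particle better but makes the displacement for a given acceleration smaller, demanding finer position readout.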
Traditionally such particle traps are constructed by enclosing a volume with electrodes carefully shaped to provide particular boundary conditions (hyperboloids, in the case of a Paul trap) for the electric fields within the trap. The voltages applied to these electrodes are then varied in a manner that effects a rotating electric field in the trap. The rotating field is usually shaped to present a saddle-shaped potential to charged particles, which for certain parameters acts to keep a charged particle confined to a small volume. A simulation of the principle is available to browsers of this page.
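The confinement principle can also be sketched numerically. The toy simulation below (dimensionless units; parameters chosen for illustration only) integrates the lab-frame motion of a charged particle in a rotating saddle potential and shows that, for a sufficiently fast rotation rate, the particle remains confined near the origin:

```python
# Toy rotating-saddle (Paul-trap-like) simulation in dimensionless units.
# The saddle U = (k/2)[(x^2 - y^2) cos(2wt) + 2xy sin(2wt)] rotates at rate w;
# for w well above sqrt(k), a particle started off-center stays bounded.
import math

def force(x, y, t, k=1.0, omega=5.0):
    """Lab-frame force from a saddle potential rotating at angular rate omega."""
    c = math.cos(2 * omega * t)
    s = math.sin(2 * omega * t)
    return -k * (x * c + y * s), -k * (x * s - y * c)

def simulate(t_end=50.0, dt=1e-3):
    """Integrate with velocity Verlet; return the largest excursion from center."""
    x, y, vx, vy, t = 1.0, 0.0, 0.0, 0.0, 0.0   # start at rest, off-center
    fx, fy = force(x, y, t)
    max_r = 0.0
    while t < t_end:
        vx += 0.5 * dt * fx
        vy += 0.5 * dt * fy
        x += dt * vx
        y += dt * vy
        t += dt
        fx, fy = force(x, y, t)                 # force is time-dependent
        vx += 0.5 * dt * fx
        vy += 0.5 * dt * fy
        max_r = max(max_r, math.hypot(x, y))
    return max_r

print(simulate())   # stays of order 1: the rotating saddle confines the particle
```

With the rotation switched off (a static saddle), the same particle escapes along the unstable axis; confinement is purely an effect of the rotation.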
Once we can manipulate a single particle in this fashion, we can then devise ways to manipulate several at once to assemble multi-particle structures and to probe inter-particle reactions. The next step is then to replicate the structure to build microparticle assembly lines. An intriguing application of a massively parallel contactless micromanipulator would be a sort of printer capable of exquisite control of the printing process -- it would, for example, be able to probe the surface thickness, and deposit precisely the right amount of toner. It could also probe the surface's electronic properties and deposit material accordingly to construct electronic structures.
By exercising the option of massive scalability, we can raise the particle trap from a humble inertial measurement unit to a sort of universal microconstructor. In order to achieve this scaling, however, we have to shrink and localize the control mechanism for the trap so that it essentially defines the volume of the trap. At this extreme, what we have is a packet-switching network where the packets are small clumps of matter.
Efforts in amorphous computing cut right to the chase and posit a sort of "paint" whose grains are small computing elements. The main focus is computation, not manipulation.
Let's do a little calculation. Looking forward to a feature size of 0.1 micron and a grain size of 1 mm^2, one has 10^8 available features on the grain, or on the order of 10^6 transistors (conservatively estimating that each occupies a 10x10 grid of features). This choice of computational substrate effectively does away with conventional computer architectures, and requires that issues of interconnection, routing, packaging, and power distribution be completely reexamined.
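The arithmetic can be checked directly, using only the quantities from the estimate above:

```python
# Check the grain estimate: 0.1 micron features on a 1 mm x 1 mm grain.
feature_nm = 100                  # 0.1 micron = 100 nm feature size
grain_side_nm = 1_000_000         # 1 mm = 10^6 nm grain edge (area 1 mm^2)

features_per_side = grain_side_nm // feature_nm     # 10^4 features per edge
features = features_per_side ** 2                   # 10^8 features per grain
transistors = features // (10 * 10)                 # one transistor per 10x10 grid

print(features, transistors)      # 100000000 1000000
```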
Now consider the less radical proposal that we do away with the monolithic computation base that wearable computing usually embodies. Wearable computers are often single-user desktop PCs repackaged in a little box with a head-mounted display and a handheld input device. The main reason for this has been the availability of development tools associated with commodity operating systems (such as Linux and Windows 95).
The main reason to choose a commodity OS is that it sidesteps the need to reinvent system services (e.g. filesystems, display drivers, memory management facilities, network support). While some of these operating systems are better-suited to embedded applications than others, they all bring along considerable baggage, not least their computational metaphor: several processes sharing time on a central processor and its I/O services, with little or no process mobility among network nodes.
Another metaphor that the monolithic operating system brings with it is the separation of user code from system code, the latter usually locked away from users who neither want nor need to see it. While this division seems sensible at first, consider that the user of a wearable computer must be able to fix and modify their machine. And if processing, storage, and interface systems are to be distributed across a user's body and physical environment, the monolithic OS becomes an unjustifiable expense.
As a concrete example of this, consider the need to distribute file and display services in a wearable computer. Instead of running high-speed busses around in clothing to maintain a commodity interconnect, why not instead split up the system services so that processing is localized? For example, a disk drive should present an object interface that allows the transfer of objects to and from a persistent store, rather than a device-level interface such as SCSI. It would be far better to ask the disk drive to fetch filesystem objects and grep the contents of files at the drive rather than far away from the data at a central processor. The benefits are obvious: simpler systems that consume less hardware, less code, and ultimately less power. There are also fewer points of failure.
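To make the contrast concrete, here is a hypothetical object-level drive interface; the class and method names are illustrative inventions, not an actual Hive or drive API. The point is that the grep runs where the data lives, and only the matches cross the network:

```python
# Hypothetical object-level drive interface (all names are illustrative).
# Queries execute at the drive; only matching object names go over the wire.
import re

class ObjectDrive:
    def __init__(self):
        self._store = {}                 # object name -> bytes (persistent store)

    def put(self, name: str, data: bytes) -> None:
        self._store[name] = data

    def get(self, name: str) -> bytes:
        return self._store[name]

    def grep(self, pattern: str) -> list[str]:
        """Search object contents locally; return only matching names."""
        rx = re.compile(pattern)
        return [name for name, data in self._store.items()
                if rx.search(data.decode("utf-8", errors="ignore"))]

drive = ObjectDrive()
drive.put("notes.txt", b"wearable computing on fabric substrates")
drive.put("log.txt", b"boundary scan chain came up with 12 nodes")
print(drive.grep("fabric"))              # ['notes.txt']
```

A SCSI-style block interface would instead ship every block to the central processor just to scan it, spending interconnect bandwidth and power on data that is mostly discarded.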
Similarly, a display device should present some high-level, object-oriented protocol such as PostScript or OpenGL rather than a pixel-oriented protocol, such as Xlib or generic VGA. This decentralization of display control reduces the amount of rework necessary to introduce new hardware to an existing environment. More importantly, it cuts down on the long-haul (head-to-toe) bandwidth necessary to support a display.
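Some rough, assumed numbers illustrate the bandwidth gap between shipping pixels and shipping drawing commands (the frame size, depth, and command budget below are illustrative choices, not measurements):

```python
# Rough comparison: raw framebuffer updates vs. high-level drawing commands.
# All numbers are assumptions chosen for illustration.
width, height, bits_per_pixel, fps = 640, 480, 8, 30

pixel_bps = width * height * bits_per_pixel * fps   # ship every pixel, every frame
command_bps = 200 * 8 * fps                         # ~200 bytes of commands per frame

print(pixel_bps, command_bps)    # 73728000 48000: about three orders of magnitude
```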
This is also applicable to computing spaces. For example, we are considering how to build lightweight network displays and input devices, so that users will only need to have a display, keyboard, and mouse to take advantage of computing resources elsewhere on the network. This is a simple idea that so far has come to naught because most network terminals have been slow, diskless workstations and the network protocols they use (e.g. X11, NFS) have been verbose and downright wasteful of network bandwidth.
We consider building a generic modular architecture for several reasons. Among these are the desires for flexibility and reusability of the system (build-once hardware), simplicity (only one sort of node has to be built), upgradability (designs can be almost trivially retargeted to faster, denser, lower-power devices as they become available, without changes to the system architecture), and encapsulability (each node can be potted away in thermally conductive epoxy with only its connectors exposed, offering high durability).
Consider a system composed of several generic logic nodes, with power distribution network and a narrow, fast interconnect between nodes. Suppose that each node contains a generic logic element (e.g. a 40k gate FPGA), some fast local memory, and several I/O connectors (with a branching factor of 2 or 3 at each node, for flexibility).
First of all, a network architecture must be imposed on this structure. Each node starts out unprogrammed, so the interconnect must permit the system to be configured from scratch. This configuration and testing bus is standardized (JTAG boundary scan) to use only five pins to program all system devices in a daisy chain. Although it would be difficult to ensure that all the nodes of a plug-together network explicitly form a daisy chain that adheres to the JTAG specification and passes through all nodes, it is simple to devise a network discovery and configuration worm that explores the system topology at startup (and on the fly as nodes are added to and removed from the network).
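Such a worm amounts to a graph traversal. The sketch below (the adjacency data and function names are invented for illustration) discovers every reachable node breadth-first from a root, yielding an order in which the whole system can be configured even when the physical wiring is not a single daisy chain:

```python
# Sketch of a discovery/configuration worm over a plug-together topology.
# The link map and names are illustrative; a real worm would walk live
# boundary-scan chains rather than an in-memory dictionary.
from collections import deque

def discover(links: dict[str, list[str]], root: str) -> list[str]:
    """links maps each node to its directly wired neighbors.
    Returns nodes in breadth-first order from the root: a valid
    configuration order, since each node is reached via configured ones."""
    order, seen, frontier = [], {root}, deque([root])
    while frontier:
        node = frontier.popleft()
        order.append(node)               # configure this node, then fan out
        for nbr in links.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return order

# A small vest topology with a branching factor of 2 and a shared neighbor:
vest = {"root": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
print(discover(vest, "root"))            # ['root', 'a', 'b', 'c']
```

Running the same traversal whenever a link change is detected handles nodes joining and leaving on the fly.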
An interesting constraint is imposed by the physical layout of a wearable. Suppose that we have a computational vest, with nodes distributed uniformly across the vest with narrow interconnects between nodes. Some of these nodes are close to particular peripherals, so it makes sense to commit them to interfacing these peripherals to the network object bus.
Not only is processing spatially distributed to conform with the locality of resources, but it can also be distributed according to other constraints. For example, it might be useful to redistribute computation to balance the system's thermal load on the wearer, or perhaps to distribute signal-processing functions over an area corresponding to a multi-element sensing aperture (e.g. a phased antenna or microphone array).
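As an illustration of the thermal-balancing idea (a toy heuristic, not a worked-out scheduler; the function and its inputs are invented for illustration), one could greedily assign each task to the currently coolest node:

```python
# Toy thermal balancer: greedily place each task on the node with the lowest
# accumulated dissipation, spreading heat across the wearer. Illustrative only.
import heapq

def balance(task_watts: list[float], n_nodes: int) -> list[float]:
    """Return the total watts assigned to each node."""
    loads = [(0.0, i) for i in range(n_nodes)]       # (watts, node id) min-heap
    heapq.heapify(loads)
    per_node = [0.0] * n_nodes
    for w in sorted(task_watts, reverse=True):       # place big tasks first
        load, i = heapq.heappop(loads)               # coolest node so far
        per_node[i] = load + w
        heapq.heappush(loads, (load + w, i))
    return per_node

print(balance([3.0, 2.0, 2.0, 1.0, 1.0, 1.0], 3))   # [4.0, 3.0, 3.0]
```

A real scheduler would also weigh the interconnect cost of moving a computation against the thermal gain, but the greedy sketch captures the basic trade.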
Since our logic elements embody no particular computer architecture, we are free to specify one (or more). For example, although an object bus has been posited as the peer-to-peer interconnect, the nodes are free to embody any virtual machines compatible with the bus protocol. There are many choices to be made. For one, we might choose to initially have each node embody a Java VM, and later on reconfigure certain nodes to optimize their ability to fit certain computing metaphors. For example, we could implement a Linux-compatible PC as a subset of nodes comprising a virtual x86 processor, a VGA emulator, and an IDE disk emulator.
While the video games of a decade ago are dead or dying, their firmware runs on in countless emulators of those processors of yore and their supporting system architectures. The system we have described removes an unnecessary layer of abstraction (i.e. a fixed-instruction-set commodity processor) and hews more closely to the original design. Such a system would do away with most of the compatibility issues that arise from the evolution of faster hardware.
Eventually, we'd like to touch on the matter of appropriate software environments. One nice feature of this system is that we expect to have a high degree of flexibility in defining the hardware, which in turn will allow the exploration of different computational paradigms.
To build a system that manipulates a particle in a trap, we want to be able to take the code written for a simulator and attach it to an actual physical system. Then we want to be able to replicate that system a millionfold and build an array of traps to assemble a universal microconstructor. To get to that stage, we need to address the fundamental issue of building networks of simple, reconfigurable computing devices to embed control and communications in things, in environments, and on people.