Wallpaper

In The Wallpaper, the various elements of a visual and aural scene (humans, chairs, desks, ambient sounds) are recorded and stored separately in a computer and then recombined in real time during playback according to instructions in a script. Representing the scene in this way lets the creator of the piece experiment with changing camera angles and shot selection, swapping characters, and modifying acoustical characteristics in response to audience interaction, all in order to express different subjective points of view of the same story. The piece also responds to passive circumstances of its delivery, showing more or fewer cuts and close-ups depending on the size and shape of the viewing window.

The two MPEG movies here are just two of the many possible playouts of the production: one presents the story from John's subjective point of view, and the other presents Kathy's. You will notice differences in shot composition, close-up placement, backgrounds, ambient sounds, and acoustics. The system can also blend these two ``extremes'' to produce versions that mix the two story perspectives to varying degrees. In the actual piece, the viewer has real-time control over both the subjective story perspective and the camera position.

The Isis object-based media prototyping environment was used to script the entire presentation. Several problems needed to be addressed: How could we restrict the spatial movement of the viewer? How would the presentation change to present different subjective points of view? How could we manage the master shot and close-ups in a manner that could adapt to different perspectives or display conditions? The approach taken was to create a small set of high-level presentation variables, each representable by a single number, that would affect the playout in different ways. Some or all of these variables would then be placed under the control of the viewer (a sketch of these variables in code appears after the list below):

Camera pose:
We decided it would be best to restrict the viewer's motion to a specific path through the attic instead of letting her roam freely and possibly get lost and miss the action. A single number can therefore represent the viewer's ``position'' along this predefined path of camera parameters, as opposed to the more than 10 separate numbers that would be needed for full camera freedom. The path was specially chosen to pass very near the camera angles originally captured on the blue-screen stage.

Closeupivity:
We needed a parameter to express whether the playout should contain more or fewer close-ups, a variable that could be controlled by knowing the size of the output screen. The higher the value, the more likely close-ups are to be cut in at various points in the presentation. The viewer would not be able to move spatially during a close-up.

Story perspective:
Another variable was needed to express which subjective point of view of the story should be presented. Its value ranges from 0 to 9, with 0 meaning John's perspective and 9 meaning Kathy's. Any value between these two extremes represents a different mixture of the two perspectives. This parameter then affects various aspects of the narration, such as the selection and placement of close-ups and backgrounds, the ambient sounds, and the room acoustics.

Scene time:
This fourth variable was needed to hold the ``current time'' of the presentation. It could be controlled interactively or, more likely, by an internal timer.
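
The following is a minimal sketch, in Python rather than Isis, of how these four variables might be bundled into a single state. All names are hypothetical, and the window-size heuristic is invented for illustration, not taken from the actual script:

    from dataclasses import dataclass

    @dataclass
    class PresentationState:
        camera_position: float  # position along the predefined attic path
        closeupivity: float     # higher values cut in more close-ups
        perspective: float      # 0 = John's point of view, 9 = Kathy's
        scene_time: float       # usually driven by an internal timer

    def suggest_closeupivity(width: int, height: int) -> float:
        """Illustrative heuristic only: smaller viewing windows call for
        more close-ups, so closeupivity rises as the window shrinks."""
        area = width * height
        return max(0.0, min(9.0, 9.0 * (1.0 - area / (1920.0 * 1080.0))))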

The timeline construct in Isis proved invaluable for expressing the camera path and the playout of the presentation over time and over the different story perspectives. The spatial position variable indexes into a timeline holding specific camera parameters at key points, and cubic interpolation is used to synthesize camera angles between those points. These parameters were used to pre-render and store several views of the three-dimensional model of the attic, which are recalled and used as backgrounds before the characters are composited into the frame.

Since the actors were shot from a limited number of angles, the composited output might look slightly odd if the virtual camera strays far from any real captured angle. As long as the camera stays within a certain ``distance'' of one of the real angles, the effect usually goes unnoticed; the script determines which actor view is most appropriate for the current camera pose.
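
A rough Python sketch of the idea follows, assuming hypothetical keyframe values and a simplified (pan, tilt, zoom) pose; the actual Isis timeline and camera model are not shown here, and Catmull-Rom is just one common choice of cubic interpolant:

    import math

    # Hypothetical camera-path timeline: (path position, (pan, tilt, zoom)).
    CAMERA_PATH = [
        (0.0, (  0.0,  0.0, 1.0)),
        (1.0, ( 30.0,  5.0, 1.2)),
        (2.0, ( 55.0, 10.0, 1.5)),
        (3.0, ( 80.0,  0.0, 1.0)),
    ]

    def catmull_rom(p0, p1, p2, p3, t):
        """Cubic interpolation between p1 and p2, with 0 <= t <= 1."""
        return 0.5 * ((2 * p1) + (-p0 + p2) * t
                      + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                      + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

    def camera_pose(position):
        """Index into the timeline and synthesize a pose between keys."""
        keys = CAMERA_PATH
        if position <= keys[0][0]:
            return keys[0][1]
        if position >= keys[-1][0]:
            return keys[-1][1]
        for i in range(len(keys) - 1):
            t0, t1 = keys[i][0], keys[i + 1][0]
            if t0 <= position <= t1:
                t = (position - t0) / (t1 - t0)
                p0 = keys[max(i - 1, 0)][1]          # repeat endpoints
                p1, p2 = keys[i][1], keys[i + 1][1]
                p3 = keys[min(i + 2, len(keys) - 1)][1]
                return tuple(catmull_rom(a, b, c, d, t)
                             for a, b, c, d in zip(p0, p1, p2, p3))

    def nearest_captured_view(pose, captured):
        """Pick the blue-screen actor view closest to the virtual camera."""
        return min(captured, key=lambda view: math.dist(view["pose"], pose))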

The other three system variables are used to create a three-dimensional story space that determines exactly what should appear on the screen for a given scene time, story perspective, and closeupivity setting. Nested timelines greatly simplify the creation of this space inside the Isis interpreter. Every point in the space indicates whether or not a close-up should be shown and, if so, specifies a frame number of a particular close-up along with a background to use behind it. Other aspects of the video, such as brightness or color tone, could also be controlled, but the playback system does not currently support modifications to these attributes.
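
A simplified sketch of such a lookup in Python appears below. The actual piece builds this space with nested Isis timelines; the entries, thresholds, frame numbers, and shot names here are invented for illustration (they loosely mirror the two examples that follow):

    # Outer list: a timeline keyed by scene time. Inner lists: choices
    # keyed by story perspective, each with a closeupivity threshold.
    STORY_SPACE = [
        (0.0,  [(9, 4.0, ("kathy_typing_cu", 120), "attic_master")]),
        (12.0, [(3, 2.0, ("john_rag_cu", 310),  "dark_corner"),
                (6, 2.0, ("john_rag_cu", 310),  "bright_clouds"),
                (9, 2.0, ("john_hand_cu", 325), "bright_clouds")]),
    ]

    def shot_for(scene_time, perspective, closeupivity):
        """Return (close-up, background) to cut in, or None to stay on
        the master shot, for one point in the three-dimensional space."""
        choices = None
        for start, entry in STORY_SPACE:
            if start <= scene_time:
                choices = entry          # latest entry at or before now
        if choices is None:
            return None
        for max_persp, threshold, closeup, background in choices:
            if perspective <= max_persp:
                if closeupivity >= threshold:
                    return (closeup, background)
                return None              # closeupivity too low: master shot
        return None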

For example, at one particular point near the beginning of the scene, the viewer may see the master shot or a close-up of Kathy typing, depending on the size of the output window.

More interestingly, near the middle of the scene, you might see John grabbing the rag off the dress form in a dark gray corner of the room, or the same action superimposed over a bright cloud background, or John grabbing a handkerchief from the hand of a third, mysterious character, all depending on the current setting of the story perspective variable.

The virtual acoustics of the attic are smoothly interpolated between two extremes, as are the volumes of the ambient sounds. At one extreme (John's perspective) is a small room with no echo or reverberation; at the other (Kathy's perspective) is a much larger, softer room with a great deal of reverberation.
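
A linear blend of this kind might look like the following Python sketch; the parameter names and values are illustrative only, not the settings used in the piece:

    def acoustics(perspective):
        """Blend room acoustics and ambient-sound volume between John's
        extreme (0: small, dead room) and Kathy's (9: large, reverberant)."""
        t = perspective / 9.0
        def mix(a, b):
            return (1.0 - t) * a + t * b
        return {
            "reverb_time_s": mix(0.2, 2.5),   # decay time of the room
            "wet_dry_mix":   mix(0.05, 0.6),  # proportion of reverberant sound
            "ambient_gain":  mix(0.3, 0.9),   # volume of ambient attic sounds
        }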