20 Ames Street
December 1, 1992
News in the Future
Near- and Long-term Examples of News in the Future
Principal Investigator, News in the Future
Three technologies are altering the relationship between mass media and the individual: (1) the plethora of digital communication channels, (2) the ubiquity of personal computing, and (3) advances in display technology. As a consequence, it is no longer a question of if or when, but how and by whom news will be delivered: news that reflects the consumer's needs and interests, arrives at the time it is needed, and takes the form that is most useful or convenient.
Caveat: In order to take advantage of this new infrastructure, we need to ensure that the channel and intelligence at both ends are properly embodied. For example, an image architecture that is interlaced is useless for high-quality text and graphics, and makes artifact-free international distribution nearly impossible.
News in the Future will manifest itself as a valuable service, available to augment or enhance all aspects of personal and interpersonal communication. But how do we get there? Any refashioning of the news industry will require a comprehensive look at content, the consumer, and presentation. Consequently, we have proposed to explore these three fundamental areas:
Partial understanding will enable more fine-grained searching, culling, sorting, and augmentation of news than is currently possible. These are tools that help authors and readers alike to organize information. True machine comprehension will enable summarization, abstraction, headlining, and self-organization.
Professor Ken Haase is directing the work on automatic and semi-automatic tools for generating content descriptions from English text. Professor Haase is also developing selection and search mechanisms that use a content layer on news materials. A long-term research goal is to understand the content of news stories by analogy. The basic research is not guaranteed to be technologically tractable in the medium term (we just don't know). However, tools and expertise for using a content layer (perhaps added manually or semi-automatically) are well within our grasp now.
Chris Schmandt is directing the development of tools for automatically building a structure for audio news, e.g., radio. Without structure, "conversing" with news is impossible. Audio is much more difficult to segment and categorize than text: simply identifying word and phrase boundaries is challenging. In the short term, Mr. Schmandt will be developing automatic methods of analyzing broadcast audio based upon changes in tempo, pitch, pauses, etc. He is also building interfaces to audio that remain robust in light of the current deficiencies in segmentation and analysis.
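To make the idea concrete, here is a minimal sketch of pause-based segmentation. The frame size and thresholds are illustrative assumptions; a real system for broadcast audio would also draw on the tempo and pitch cues described above.

```python
def segment_by_pauses(samples, frame=80, silence=0.05, min_pause_frames=3):
    """Split audio (a list of amplitude values in [-1, 1]) into segments
    at sustained low-energy stretches, a crude stand-in for pause detection."""
    # Per-frame RMS energy.
    energies = []
    for i in range(0, len(samples), frame):
        chunk = samples[i:i + frame]
        energies.append((sum(s * s for s in chunk) / len(chunk)) ** 0.5)

    segments, start, quiet = [], 0, 0
    for idx, e in enumerate(energies):
        if e < silence:
            quiet += 1
            if quiet == min_pause_frames:      # a pause long enough to cut on
                end = (idx - min_pause_frames + 1) * frame
                if start is not None and end > start:
                    segments.append((start, end))
                start = None
        else:
            if start is None:
                start = idx * frame            # speech resumes: open a new segment
            quiet = 0
    if start is not None and start < len(samples):
        segments.append((start, len(samples)))
    return segments
```

Each returned pair is a (start, end) range of sample indices; downstream tools could then categorize each segment independently.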
Trying to identify which news items a person will want to see involves two hard problems: figuring out what the article is about, and figuring out what the person wants to see. Note that personalization does not mean "no editor", nor does it necessarily imply interactivity.
At the Media Laboratory we built an electronic version of Popular Photography that was enhanced with some knowledge of individual readers. Every month, Popular Photography presents an article reviewing three new cameras. We added a fourth camera to the review article: the one owned by the reader. In addition, we used entries from the reader's appointment calendar to customize advertising. If, for example, there is an article about Evandro Teixeira, and the reader is going to be in New York City on the weekend, an advertisement for the gallery on 57th Street which has Mr. Teixeira's work on display is included as a side-bar to the article.
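A minimal sketch of the calendar-driven matching rule behind such a side-bar. The field names and data shapes here are hypothetical, chosen only to illustrate the idea:

```python
def select_sidebar_ads(article_tags, calendar, ads):
    """Pick ads whose topic matches the article and whose venue lies in a
    city the reader's calendar says they will visit (illustrative rule)."""
    upcoming_cities = {entry["city"] for entry in calendar}
    return [ad for ad in ads
            if ad["topic"] in article_tags and ad["city"] in upcoming_cities]
```

With an article tagged "photography" and a weekend trip to New York on the calendar, only the New York gallery's ad would be selected for the side-bar.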
To reiterate, personalization and interactivity are orthogonal. In the examples described above, the experience was enhanced by personalization, without significant changes to the style of presentation or interface. I can read or watch the Daily Me with or without a joystick.
We are developing a general-purpose user modeling system which is being used in support of consumer-directed content analysis. Our approach is both comprehensive and implicit. Three observations about user modeling:
Moving between the three areas is akin to moving along a continuum between the "Daily Me" and The Boston Globe.
The user model, like the active badge , will be something you wear or carry in your pocket. Privacy will be protected because the release of information from the model is gated by the user.
Professor Pattie Maes is investigating machine learning as applied to news retrieval. A news system used on a regular basis has to learn and follow the user's changing news interests, which means managing a balance between "specialization" and "exploration." Professor Maes is employing genetic algorithm techniques and an evolution metaphor to evolve a population of agents that retrieve news. While some agents are improving "recall," other agents are scouting for new areas of interest. Agents that fail to retrieve articles of interest to the consumer die off.
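An illustrative sketch of one such evolution step, assuming agents are simple keyword-weight profiles and fitness comes from consumer feedback. The parameter names and mutation rule are my assumptions, not the Lab's implementation:

```python
import random

def evolve(agents, fitness, keep=0.5, mutate_rate=0.2, rng=None):
    """One generation: rank agents (keyword-weight dicts) by fitness,
    retire the losers, and refill the population with mutated copies
    of the winners (an illustrative evolution step)."""
    rng = rng or random.Random(0)
    ranked = sorted(agents, key=fitness, reverse=True)
    survivors = ranked[:max(1, int(len(agents) * keep))]  # "specialization"
    children = []
    while len(survivors) + len(children) < len(agents):
        parent = rng.choice(survivors)
        child = dict(parent)
        for kw in child:                                  # jitter weights: "exploration"
            if rng.random() < mutate_rate:
                child[kw] = round(min(1.0, max(0.0,
                                  child[kw] + rng.uniform(-0.3, 0.3))), 3)
        children.append(child)
    return survivors + children
```

The surviving agents keep refining recall in known areas of interest, while their mutated offspring scout nearby topics, mirroring the specialization/exploration balance described above.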
"Advertising for ads" is a restructuring of the relationship between advertizer and consumer. Indeed, an experiment at a local community cable system owned by Adams Russell illustrated that the goal of the consumer is not to avoid ads. Whether it is "ads as we need them", such as your car asking your TV to start showing tire ads, or advertisements as information like in the Popular Photography example, advertising in conjunction with user modeling will be more directed and more useful to all parties.
The role of presentation in News in the Future is twofold: to counterbalance the inadequacies of content analysis and consumer modeling, and to adapt to the varying resources available to the consumer.
Some media objects will be able to present themselves effectively through a variety of presentation modes (e.g., video, audio, text, and still). Context-sensitive media objects will be able to determine which modes to use based on the display capabilities of the application, the available resources, and the needs or preferences of the user. For example, the same news article may present itself in full audio, video, and text at the breakfast table, but switch to an audio-only presentation when taken onto a crowded subway.
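The mode-selection decision can be sketched as a small set of context rules. These particular rules and flag names are hypothetical, echoing only the breakfast-table/subway example:

```python
def choose_modes(available, context):
    """Choose which presentation modes a media object should use,
    given the playback context (illustrative rules, not a real policy)."""
    modes = set(available)
    if context.get("in_public"):              # crowded subway: audio only
        modes &= {"audio"}
    if not context.get("has_display", True):  # no screen: drop visual modes
        modes -= {"video", "text", "still"}
    return sorted(modes)
```

At the breakfast table the object keeps all of its modes; on the subway the same object degrades gracefully to audio.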
Context-sensitive media will be able to serve several purposes. The same media presentation may be called on in either an instructional or an entertainment setting. A single presentation may be shown to different age groups or different nationalities, and it will be expected to adjust itself accordingly. Media with this ability will be not only valuable and versatile members of a shared media library, but also the key to truly personalized applications that weave multiple sources into a coherent whole.
The newspaper display of today is a forgiving display. The combination of abstraction and a fine granularity of content makes it as easy to skip over the things you aren't interested in as to find the things you are. In addition, it is interruptible, it offers transparent look-ahead, and it gives a feeling of infinitude. It is the interface that mediates between the reader and the news. We want to provide many of these same qualities to news presentation in general, regardless of the medium.
Professor Glorianna Davenport is building a facile interface to television news by trying to balance a narrative presentation with the ability to seek out more information, either from live footage or from archives. Professor Davenport will be exploring issues such as authoring television news to enable dynamic re-editing.
Chris Schmandt is developing a conversational interface to audio news which has many of the same features. Mr. Schmandt will be addressing the question of how to converse fluently with the news while contending with imperfect speech recognition. Automating the restructuring of audio news to enable such presentations is a long-term research goal.
Since there is a readership of one, each presentation must be prepared individually. Professor Muriel Cooper is tackling the problems of automatic design and layout. She is developing an automatic pagination system which has text, video, and audio primitives.
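A toy sketch of greedy pagination over mixed media primitives. The height-based model and field names are assumptions for illustration; a real layout system would weigh many more constraints (column widths, figure placement, typographic rules):

```python
def paginate(items, page_height):
    """Greedy layout: flow media primitives (each with a height) onto pages,
    starting a new page when the next item would overflow."""
    pages, current, used = [], [], 0
    for item in items:
        if used + item["height"] > page_height and current:
            pages.append(current)               # close the full page
            current, used = [], 0
        current.append(item)
        used += item["height"]
    if current:
        pages.append(current)
    return pages
```

Text, video, and audio primitives can share one flow: a video block simply occupies layout space like any other element, with playback handled by the display.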
Professor Neil Gershenfeld is going to be addressing the issues of permanence and impermanence of paper: reusable paper for printing in the home, and force sensing and generation to provide tactile I/O for information systems, e.g. the feel of paper.
The final topic and perhaps most important issue: How to attract a new generation of readership? Professor Mitchel Resnick will be conducting a number of experiments in which news will be a part of learning. The foundation of his approach is Professor Seymour Papert's Constructionist Theory of Education. News, whether retrieved from the wire or produced by the students themselves, will be construed as a service, i.e., a building block integral to all activities in the classroom. The primary test-bed of the work will be the Hennigan School of the Future, where Professors Papert and Resnick have been conducting experiments for the past 6 years.
In my opening statement, I argued that the questions of electronic news delivery are not if or when, but how and by whom. I cannot tell you who will be delivering the news in the future, but here at the Media Lab, you will get a glimpse of our vision of how it will be delivered.