This course will explore techniques for designing and implementing applications that simultaneously
* Present interactive interfaces via synthesized or scanned graphics, sound, video, gestural input, etc.
* Incorporate significant learning, representational, or problem-solving capabilities
Traditionally, programs in the first category employ direct-manipulation graphics and event-driven interface techniques; those in the second rely on representation and inference technology from artificial intelligence. New intelligent media applications must incorporate both kinds of techniques, yet few current books or courses describe how to combine them smoothly in a single application.
Among the issues we will address are:
* Inferring user intent from direct-manipulation actions
* "Agent" interfaces that provide intelligent assistance to users
* Visualizing a program's knowledge and representation structures
* Using object-oriented techniques to encourage generality of interface components and facilitate code reuse
The course will be based on case studies of several existing interfaces and implementation techniques, including graphical editors with learning components, constraint-based and grammar-based automatic design systems, graphical browsers for knowledge representation structures, knowledge-based video editing systems, and agent-based personal assistants. During the semester, each student will be required to complete three or four implementation projects (either extending or modifying an existing interface, or developing a new one) and present them to the class for analysis and discussion. Programming will be done in Lisp.