I am a Postdoctoral Associate at the MIT Media Lab in the Robotic Life Group. My work centers on building machines with social intelligence. As computers, machines, and robots become a larger part of our daily lives, they will have to be easier for us to interact with. In particular, I am interested in machines that are meant to exist in human environments and interact with everyday people. In these situations, the person working with the robot should not be required to learn a new form of interaction; machines meant to learn from people can instead take advantage of the ways in which people naturally approach teaching. This work has two interconnected goals: improving the performance of a machine's learning behavior through attention to human interaction, and improving the experience of the human teacher by designing interactive learning algorithms based on how people teach. This page details several examples of my research in Socially Guided Machine Learning over the past few years.

Socially Guided Machine Learning: Leonardo

Learning via Guided Exploration
| Video: LeoGuidedExploration |

A social learner should be able to learn on its own, but take advantage of a human partner when one is available. This system is motivated to explore its environment and learn about novel events. A human partner can influence this exploration through attention direction, action suggestions, labeling of goal states, and positive/negative feedback. The Guided Exploration system must frame its own learning problems, creating task goals and expanding a policy to reach each goal.
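A minimal sketch of how this interplay might look, with novelty-driven exploration that a teacher's suggestions can override (all names and the novelty measure are my illustration, not Leo's actual architecture):

```python
import random

class GuidedExplorer:
    """Toy guided-exploration loop: explore what is novel, but let a
    human partner's suggestions take precedence when they arrive."""

    def __init__(self, actions):
        self.novelty = {a: 1.0 for a in actions}  # higher = less explored
        self.suggestion = None                    # teacher's suggested action

    def suggest(self, action):
        """Teacher suggests an action ("try stacking the blocks")."""
        self.suggestion = action

    def choose_action(self):
        # A suggestion temporarily overrides novelty-driven exploration.
        if self.suggestion is not None:
            action, self.suggestion = self.suggestion, None
            return action
        # Otherwise pick among the most novel actions.
        top = max(self.novelty.values())
        return random.choice([a for a, n in self.novelty.items() if n == top])

    def observe(self, action):
        # Repetition makes an action less novel, so attention moves on.
        # A teacher labeling the current state as a goal would spawn a
        # new self-framed learning task at this point.
        self.novelty[action] *= 0.5

leo = GuidedExplorer(["press_button", "stack_blocks", "wave"])
leo.suggest("stack_blocks")
print(leo.choose_action())  # 'stack_blocks' -- guidance wins this turn
```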

Learning a Task and Generalizing the Goal
| Video: LeoTaskLearning | Paper: IROS 2004 |

In this work, a human partner is able to teach Leo via a social dialog. Over multiple trials, the teacher leads him through the task, giving feedback and refining failed attempts. In the process, Leo forms multiple hypotheses about what the goal could be and, through interaction with the teacher, refines this representation.
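To picture the goal-generalization step, here is a deliberately tiny sketch in which candidate goal features are intersected across successful trials (the data and representation are invented; Leo's actual goal representation is richer):

```python
# State features observed at the end of each successful demonstration:
demonstration_outcomes = [
    {"box_open", "toy_inside", "lid_up"},
    {"box_open", "toy_inside", "lid_down"},
]

# Start from the first success, then keep only the features that every
# success shares -- incidental details fall away.
goal_hypothesis = set(demonstration_outcomes[0])
for outcome in demonstration_outcomes[1:]:
    goal_hypothesis &= outcome

print(goal_hypothesis)  # {'box_open', 'toy_inside'} -- lid position is incidental
```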

Task Learning and Collaboration
| Video: LeoLearningAndCollab | Papers: AAMAS 2004, IJHR |

This project is about teaching specific tasks to a robot with the goal of subsequently performing those tasks together in collaboration. The robot should be able to learn the overall task goal from a natural interaction, and then execute the task collaboratively, taking turns and deciding who should complete the various subgoals of the task.

Studies of Leo's Nonverbal Expression
| Paper: IROS 2005 |

We conducted user studies with Leonardo in which people were asked to teach Leo the names of three buttons and then ask him to turn each button on. While seemingly simple, several points can lead to communication breakdowns (speech recognition errors, pointing detection errors, etc.). In this study we learned that Leo's nonverbal communication (eye gaze, subtle nods and shrugs) was most helpful to the human in these error conditions: people were able to understand and correct the problem and successfully complete the task.

Learning from Ambiguous Demonstrations
| Video: LeoPerspTaking | Papers: RAS, AAAI 2006 |

While a human user may provide a demonstration that is sensible from their perspective, it may be ambiguous from the perspective of the robot's learning algorithm. We present a socially engaged learner that is able to correctly learn from such "flawed" demonstrations by taking the visual perspective of the human instructor to clarify potential ambiguities.
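A schematic of the disambiguation idea, assuming a simple occlusion model (everything here is hypothetical scaffolding around the core intuition):

```python
def visible_to(viewer, objects, occluded):
    """Objects the viewer can see, given a map of what is hidden from whom."""
    return {o for o in objects if o not in occluded.get(viewer, set())}

objects = {"red_button", "blue_button", "green_button"}
occluded = {"teacher": {"green_button"}}  # a barrier hides it from the teacher

# The robot sees all three buttons, so a demonstration referring to "the
# button" is ambiguous; restricted to the teacher's viewpoint, only two
# candidates remain.
print(visible_to("teacher", objects, occluded))  # {'red_button', 'blue_button'}
```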

Social Emotional Referencing
| Video: LeoSocialReferencing | Paper: RO-MAN 2005 |

This work demonstrates Leo's social attention mechanism, emotion system, and appraisal mechanism. Each is interesting in its own right, but working together they enable a higher-level ability. Social referencing is a mechanism preverbal children use to assess unknown situations and things, and we show that robots can similarly use this skill to assess the unknown via interaction with humans.
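The core loop can be caricatured in a few lines: when the robot's own appraisal of something novel is uncertain, it borrows the human's emotional reaction (valences and the confidence threshold are invented for illustration):

```python
def appraise(own_valence, own_confidence, human_expression):
    """Fall back on the human's expression when self-appraisal is uncertain."""
    expression_valence = {"smile": +1.0, "frown": -1.0, "neutral": 0.0}
    if own_confidence < 0.5 and human_expression is not None:
        return expression_valence[human_expression]  # reference the human
    return own_valence

print(appraise(own_valence=0.0, own_confidence=0.2, human_expression="frown"))
# -1.0 -> the unknown object is treated as something to avoid
```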

Socially Guided Machine Learning: Sophie

Sophie's Kitchen: Interactive Reinforcement Learning
| Demo: Teach Sophie (Req. Java 1.4.2+; on PC use Firefox or IE; on Mac use Safari) |
| Paper: RO-MAN 2006 |

How do humans want to teach machines? Sophie's Kitchen is a web-deployable Java platform for experimenting with Socially Guided Machine Learning. In the initial experiment, 18 non-expert users trained Sophie to bake a cake. We made three main observations about teaching behavior: people assume they can guide the agent, they actively adjust to better fit the learner, and they have a strong positive bias in their rewards.
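For concreteness, here is a minimal interactive Q-learning loop in the spirit of this setup, with the human's feedback serving directly as the reward signal (Python rather than the original Java applet; states, actions, and constants are placeholders):

```python
import random

Q = {}                               # (state, action) -> value estimate
alpha, gamma, epsilon = 0.3, 0.9, 0.1

def q(s, a):
    return Q.get((s, a), 0.0)

def choose(state, actions):
    """Epsilon-greedy action selection."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q(state, a))

def update(s, a, human_reward, s2, actions):
    # The teacher's feedback clicks stand in for environmental reward.
    best_next = max(q(s2, a2) for a2 in actions)
    Q[(s, a)] = q(s, a) + alpha * (human_reward + gamma * best_next - q(s, a))

update("bowl-on-shelf", "pick-up-bowl", +1.0, "bowl-held",
       ["pick-up-bowl", "put-down-bowl", "use-bowl"])
print(Q[("bowl-on-shelf", "pick-up-bowl")])  # 0.3 -- nudged up by the teacher
```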

Adding Guidance to Interactive Reinforcement Learning
| Paper: AAAI 2006 |

Having found that people assume they can communicate about the future, we added a guidance mechanism to the Sophie agent and modified the RL algorithm to incorporate this guidance. The modified version was tested with non-expert users and found to improve learning on several dimensions.
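One simple way such a mechanism can work, sketched here as a modification of epsilon-greedy selection (my illustration of the idea, not the published algorithm's exact form): when the teacher pre-emptively highlights an object, exploration is restricted to actions involving it.

```python
import random

def choose_with_guidance(q, state, actions, guidance=None, epsilon=0.1):
    """Epsilon-greedy selection, narrowed by the teacher's guidance."""
    if guidance is not None:
        guided = [a for a in actions if guidance in a]
        if guided:
            actions = guided  # explore only within the guided subset
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

# Guidance toward the bowl narrows attention to bowl-related actions:
print(choose_with_guidance({}, "start",
                           ["pick-up-bowl", "pick-up-tray", "use-oven"],
                           guidance="bowl"))  # 'pick-up-bowl'
```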

Using Transparency in Interactive Reinforcement Learning
| Paper: ICDL 2006 |

Human learners help their teacher by revealing internal state, making the learning process transparent. We modified Sophie to use gaze as a transparency device, reflecting the relative certainty of her current action selection. In tests with non-expert users, this communication improved the guidance received from the human teacher, showing that through transparency a machine learner can improve its own learning environment.
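One plausible reading of the mechanism (the certainty measure and thresholds are my own stand-ins): derive certainty from the margin between the best and second-best action values, and map it to a gaze cue.

```python
def gaze_behavior(q_values):
    """Map action-selection certainty to a gaze cue."""
    ranked = sorted(q_values, reverse=True)
    margin = ranked[0] - ranked[1] if len(ranked) > 1 else float("inf")
    if margin > 0.5:
        return "fix gaze on the chosen object"  # confident: act decisively
    return "glance between the candidates"      # uncertain: invite guidance

print(gaze_behavior([0.9, 0.1]))   # confident
print(gaze_behavior([0.45, 0.4]))  # uncertain -> solicits teacher guidance
```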

Learning Social Intentions by Observation

ThoughtStreams: Simulating Commonsense
| White Paper: ThoughtStreams |

This project presents simulation as an alternative approach to the problem of commonsense knowledge acquisition. This position stems from the view that commonsense is essentially redundant shared experience over time, and posits that reality computer games (like The Sims) could be a forum in which computers gain this shared experience from a human user. An initial implementation, ThoughtStreams, has shown some early success in predicting commonsensical future events based on past experience in a simple simulation game of a shopping mall.
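A toy version of the underlying idea, with event-sequence statistics standing in for simulated experience (the episodes below are invented):

```python
from collections import Counter, defaultdict

observed = [  # event sequences from repeated visits to a simulated mall
    ["enter_store", "browse", "pay", "leave"],
    ["enter_store", "browse", "leave"],
    ["enter_store", "browse", "pay", "leave"],
]

# Count which event tends to follow which.
following = defaultdict(Counter)
for episode in observed:
    for now, nxt in zip(episode, episode[1:]):
        following[now][nxt] += 1

def predict(event):
    return following[event].most_common(1)[0][0]

print(predict("browse"))  # 'pay' -- the commonsensical expectation
```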

DriftCatcher: Understanding Implicit Social Context in Electronic Communication
| Papers: MIT Masters Thesis, INTERACT 2003 |

My master's thesis was about the social context conveyed in email. Can a computer recognize social intentions in electronic communication? I built SVM classifiers, trained on a dataset of email, to recognize 8 social contexts (informing, inquiring, supportive, etc.), along with a personal social network modeling agent. I then built a webmail client, DriftCatcher, that uses this information to display email in social terms rather than the purely temporal ordering of current mail browsers. I also ran a user study to evaluate how the addition of social context to the email interface helps people understand and use their social networks.
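The classification step, sketched with today's tooling (scikit-learn postdates the thesis, and the emails and labels below are invented; only three of the eight contexts are shown):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

emails = [
    "Just letting you know the meeting moved to 3pm.",
    "Could you send me the draft when you get a chance?",
    "That's wonderful news, congratulations!",
]
contexts = ["informing", "inquiring", "supportive"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(emails, contexts)
print(clf.predict(["Could you send me the figures?"]))  # likely ['inquiring']
```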

Modeling User Intentions for HCI Applications

LAFCam: Leveraging Affective Feedback Camcorder
| Paper: CHI 2002 | Website: more details |

This is a system for recording and editing home video, and an example of a digital appliance that senses the user and the environment in order to enhance the user's experience. LAFCam facilitates browsing and provides automatic editing features by indexing the moments where the camera operator laughed, and by visualizing skin conductivity and facial expressions as indicators of interesting footage during the editing session.
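An illustrative indexing pass over the affect signals (the signal format and threshold are invented):

```python
def interesting_segments(laughter, skin_conductance, threshold=0.7):
    """Flag frame indices where an affect signal spikes."""
    return [t for t, (laugh, sc) in enumerate(zip(laughter, skin_conductance))
            if laugh or sc > threshold]

laughter         = [False, False, True, False, False]
skin_conductance = [0.20, 0.30, 0.90, 0.80, 0.10]
print(interesting_segments(laughter, skin_conductance))  # [2, 3]
```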

Cheese
| Paper: CHI 2001 |

Cheese extends conventional web interfaces, which respond to and consider only mouse clicks when building a user model, by taking into account all mouse movements on a page as an additional layer of information for inferring user interest. We developed a straightforward way to record all mouse movements on a page and conducted a user study to analyze mouse behavior trends. Certain mouse behaviors were found to be common across many users and can help content providers increase the effectiveness of their interface designs.
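On the analysis side, a dwell-time heuristic over a recorded trail might look like this (the log format and grid size are invented for the sketch):

```python
from collections import Counter

trail = [  # (x, y) samples at a fixed rate, as a recorder might log them
    (10, 10), (12, 11), (300, 40), (302, 41), (303, 42), (305, 40),
]

def region(x, y, cell=100):
    return (x // cell, y // cell)  # coarse grid over the page

dwell = Counter(region(x, y) for x, y in trail)
print(dwell.most_common(1))  # [((3, 0), 4)] -- the user hovered here longest
```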

Eye-aRe
| Paper: CHI 2001 |

This is a glasses-mounted eye motion detection device, designed to detect and communicate the intentional information conveyed in eye movement. Eye-aRe notices patterns of eye motion such as gazing (which indicates attention) and blink rate (which can indicate stress level). It stores this information and transfers it to various external devices in order to enable a richer experience with the environment.
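A caricature of how the two patterns might be interpreted before being passed along (all thresholds invented):

```python
def interpret(blinks_per_minute, gaze_dwell_seconds):
    signals = []
    if gaze_dwell_seconds > 2.0:
        signals.append("attending")        # sustained gaze -> attention
    if blinks_per_minute > 25:
        signals.append("elevated stress")  # rapid blinking -> possible stress
    return signals or ["neutral"]

print(interpret(blinks_per_minute=30, gaze_dwell_seconds=3.5))
# ['attending', 'elevated stress'] -- what the device might transmit
```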