What Was I Thinking?

a.k.a. iRemember, the Memory Prosthesis

Sunil Vemuri 

vemuri at media dot mit dot edu

MIT Media Lab

Electronic Publishing Group

Advisor: Walter Bender

Update: While doing this research, I received numerous requests to make a memory aid available to the public. Thanks, everyone, for your encouragement! I now have something to offer. I have founded a company called QTech and, in January 2007, we launched reQall to the public.
Please try it (all you need is a phone) and let me know what you think, and what you believe would make a great memory aid.

Videos: Memory: a Sanjay Gupta Special, CNN Explorers 2005, CNN Explorers 2006


The aim of this research project is to build technology that helps people remember. The problem of forgetting should be familiar to almost anyone; indeed, the reader will likely not remember much about this paper in a week. Memory aids have existed for as long as one can remember: strings tied around fingers, Post-it™ Notes, and Palm Pilots™ are all tools people use to help themselves remember. However, all of these require active effort before a memory is triggered, and none takes advantage of the plethora of potential memory triggers available to the would-be rememberer. Computers can help.

Computers have reached the point at which continuous, verbatim recording of an individual's life experiences is technologically feasible. The challenge now is turning vast repositories of such recordings into a useful resource while respecting the social, legal, and ethical ramifications of ubiquitous recording. We have built a "memory prosthesis" that can be used to amass such data for the purpose of helping people with common, everyday memory problems.

Many things can serve as good memory triggers: the smell or taste of homemade cooking, the smile on a child's face, a good joke, the roar of the crowd when your sports team scores, etc. In our case, the device records audio from conversations and happenings, analyzes and indexes the audio in an attempt to identify the best memory triggers, and provides a suite of retrieval tools to help the wearer access memories when a forgetting incident occurs. In addition to the audio, other data about the event (e.g., location, weather, related email, calendar entries, etc.) are collected and indexed in case these also might help trigger memories of the event. The current implementation of the memory prosthesis device (running on an iPaq PDA) is shown below (close-up of screen on right).


The figure above summarizes the types of data recorded (left), the analyses used to index the recordings (middle), and screenshots (right) of the search tools used to help find memories. Again, the main data source for the project is recorded audio.
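To make the data model concrete, the kind of per-event record and keyword index described above might be sketched as follows. The `RecordedEvent` fields and the `index_by_keyword` helper are hypothetical names invented for this illustration, not the system's actual code:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class RecordedEvent:
    """One audio recording plus contextual metadata that might trigger memories."""
    start: datetime
    audio_path: str
    location: str = ""                      # e.g., a room name or GPS fix
    weather: str = ""                       # conditions at recording time
    calendar_entries: List[str] = field(default_factory=list)
    related_email: List[str] = field(default_factory=list)
    transcript: str = ""                    # filled in later by the speech recognizer

def index_by_keyword(events):
    """Build a simple inverted index: spoken word -> events containing it."""
    index = {}
    for event in events:
        for word in set(event.transcript.lower().split()):
            index.setdefault(word, []).append(event)
    return index
```

A lookup in this index returns every recorded event whose transcript mentions the query word, which is the backbone of the keyword search described later.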

The computer attempts to determine the most important parts within the audio recordings. The analyses include trying to determine who was speaking, whether the discussion was calm or a heated argument, and whether there were funny parts. The computer then picks a short set of audio clips that, in turn, can serve as good memory triggers when someone is trying to remember something about the past event.
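A minimal sketch of this clip-selection idea follows. The cue names and weights are invented for the illustration; the real analyses (speaker identification, arousal detection, humor detection) are far more involved:

```python
def score_segment(features):
    """Combine illustrative cues into one 'memory trigger' score.
    Weights here are arbitrary, chosen only to show the idea."""
    return (2.0 * features.get("laughter", 0.0)        # funny parts
            + 1.5 * features.get("arousal", 0.0)       # heated vs. calm discussion
            + 1.0 * features.get("speaker_change", 0.0))  # a new voice joins in

def best_clips(segments, k=3):
    """Return the k highest-scoring short segments as candidate memory triggers."""
    return sorted(segments, key=score_segment, reverse=True)[:k]
```

Ranking short segments this way lets the retrieval tools surface a handful of promising clips instead of forcing the user to replay hours of audio.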

One of the more useful analyses involves converting the audio recordings to text with a "speech recognizer." Once the audio is text, keyword searching across countless hours of recorded speech becomes possible. In a sense, it is like having a Google search engine for your past.
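A toy version of such a search is sketched below, assuming the recognizer emits (time offset, word) pairs; `keyword_search` is an illustrative name, not the project's API:

```python
def keyword_search(timed_words, query):
    """timed_words: list of (seconds_into_recording, word) pairs from the
    recognizer. Returns the offsets at which the query word appears, so
    playback can jump straight to those moments."""
    q = query.lower().strip()
    return [t for t, w in timed_words if w.lower() == q]
```

Returning time offsets rather than bare matches is what makes the result useful: the interface can start playback at the exact moment the word was spoken.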

The user interface for browsing and searching through recordings is shown above. Recordings (yellow), calendar entries (blue), and weather data are shown. A keyword-search feature (upper right) is available, with results shown in the lower left. Though not shown above, the tool also lets the user search through email messages that might likewise serve as good memory triggers for a past, recorded event.

Interface for browsing, searching, and playing an individual recording.

As mentioned before, speech-recognition software converts the audio recordings into text. A sample transcript and interface are shown above. Transcription errors are common. To help users find information despite such errors, our software renders the text so that brighter words are those with a higher probability of being recognized correctly; dimmer words are more likely to be wrong. A phonetic, or "sounds like," search feature lets users find words or phrases (results shown as yellow text) despite transcription errors. Our results suggest that displaying error-laden transcripts in this manner, combined with phonetic searching, helps people quickly find the correct section of a long audio recording, and thereby a good memory trigger for the past event.
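"Sounds like" matching can be approximated with a classic phonetic code such as Soundex. The sketch below pairs a simplified Soundex with a hypothetical confidence-to-brightness mapping; it illustrates the idea only and is not the project's implementation:

```python
# Standard Soundex letter groups: similar-sounding consonants share a digit.
CODES = {}
for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                       ("l", "4"), ("mn", "5"), ("r", "6")]:
    for ch in letters:
        CODES[ch] = digit

def soundex(word):
    """Simplified Soundex code (first letter + up to three digits)."""
    word = word.lower()
    first = word[0].upper()
    digits = []
    prev = CODES.get(word[0], "")
    for ch in word[1:]:
        d = CODES.get(ch, "")
        if d and d != prev:
            digits.append(d)
        if ch not in "hw":          # h/w do not break a run of equal codes
            prev = d
    return (first + "".join(digits) + "000")[:4]

def phonetic_search(transcript_words, query):
    """Return transcript words whose Soundex code matches the query's."""
    qc = soundex(query)
    return [w for w in transcript_words if soundex(w) == qc]

def brightness(confidence):
    """Map a recognizer confidence in [0, 1] to a gray level (0-255),
    keeping even low-confidence words dim but legible."""
    return int(80 + 175 * confidence)
```

With this scheme, a query for "Robert" would still surface a transcript that the recognizer mis-heard as "Rupert," since both words share the code R163.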

The software runs on Macintosh, Windows, Linux, and iPaq PDAs. Research at the MIT Media Lab continues to improve the design of the "memory prosthesis."