Media Timelines

The world wide web was conceived 30 years ago. What will internet media look like after another 30 years? Should we expect it to be mostly the same? Or will it evolve rapidly in the coming years?

To gain insight into these questions, I have been studying the evolution of other media technologies and identifying ways in which they are similar to, and different from, the internet.

Media Timelines packages parts of this research in an interactive application that makes it easy to observe and compare the evolution of sound recording, film, and internet media technologies.

Try it out!

By employing a contextual zoom interface, Media Timelines makes it possible to zoom effortlessly from a high-level historical overview of 20th-century music and technology to a detailed timeline showing the exact day a music album was released. The interface exposes a growing database of information, details, and historical anecdotes on art, music, media, and history.

This technique for interacting with historical media content led to some surprising conclusions about the current state of internet media, and some interesting hypotheses about the future, both of which will form the foundation for my PhD dissertation.


This internet-connected piano visualizer was made in one day as part of the Arts@ML Media Lab class. The project was the basis for a subsequent algorithmic EDM composition performed at a 99F event at the Media Lab.

Tempic Integrations

Tempic Integrations is a musical study that experiments with the use of multiple simultaneous tempi (the plural of tempo) in a single composition. While listening to the composition below, consider:

  • Can you hear the parallel musical tempi as they accelerate and decelerate relative to each other?
  • Can you hear when the parallel musical tempi de-synchronize and re-synchronize?
  • How can we design these tempo changes so the layers synchronize at defined musical moments? (Hint: calculus)
  • How can we use this process musically?
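To spell out the hint: the number of beats elapsed is the integral of the tempo curve, and two tempo layers re-synchronize exactly when their beat counts differ by a whole number of beats. (This is my own summary of the condition, not the full derivation from the blog post.)

```latex
% Beats elapsed by time t under a tempo curve given in BPM:
B(t) = \frac{1}{60}\int_0^{t} \mathrm{tempo}(\tau)\,\mathrm{d}\tau

% Two layers re-synchronize at time T when their beat counts
% differ by a whole number of beats:
B_2(T) - B_1(T) = k, \qquad k \in \mathbb{Z}
```

Designing the accelerando then amounts to choosing tempo curves whose integrals satisfy this condition at the desired musical moment.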


Consider the factors that make a musical instrument expressive. The gold standard is the human voice. Can any other instrument be as expressive as the human voice? Probably not, though some may come close. Other instruments are still useful because they extend our capabilities.

Example of a simple polytempic accelerando

This project explores a particular way of extending our sonic palette: it uses integral calculus to unlock a class of previously inaccessible rhythmic patterns. Specifically, it shows how simultaneous musical tempi can continuously accelerate and decelerate relative to each other while coming in and out of phase at defined musical points, as shown in the image above.

The project had three phases:

  1. Design of a mathematical algorithm for computing the continuous tempo curves required for the polytempic accelerando shown in the image above.
  2. Implementation of Python routines and development of a workflow for generating, auditioning, and editing the patterns in a Digital Audio Workstation.
  3. Composition of a musical piece using the algorithm implementation.
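As a toy illustration of phase 1 (my own sketch, not the project's actual Python routines; the tempos and ramp durations here are invented), consider two voices that ramp linearly to 1.5× the original tempo over slightly different durations:

```python
# Sketch: two voices ramp linearly from 120 BPM to 180 BPM (1.5x the
# original tempo), but over different durations, so they drift out of
# phase and then land exactly one whole beat apart -- back in phase.

def beats_elapsed(t, start_bpm, end_bpm, ramp_secs):
    """Beats played by time t (seconds): the tempo curve's integral / 60."""
    if t <= ramp_secs:
        # During the linear ramp, integrate start + (end - start) * t / ramp.
        return (start_bpm * t + (end_bpm - start_bpm) * t ** 2 / (2 * ramp_secs)) / 60
    # After the ramp, the tempo holds constant at end_bpm.
    ramp_beats = ramp_secs * (start_bpm + end_bpm) / 2 / 60
    return ramp_beats + (t - ramp_secs) * end_bpm / 60

# Voice 1 ramps over 30 s, voice 2 over 32 s.  With a linear ramp, the
# final phase gap works out to 0.5 * (D2 - D1) beats, so a 2-second
# difference in ramp length yields exactly one whole beat.
gap = beats_elapsed(40.0, 120, 180, 30) - beats_elapsed(40.0, 120, 180, 32)
print(gap)  # 1.0 -- the voices are phase-aligned again
```

Mid-ramp the gap is fractional (the voices are audibly out of phase); picking ramp durations so the final gap is an integer is what makes the layers re-synchronize at a defined musical moment.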

While listening to the composition, listen for the two melodic patterns:

  • Both patterns play the same melody, one octave apart
  • At the start of the piece both patterns play together at the same tempo, synchronized with each other, and with the kick drum.
  • Both patterns accelerate to 1.5 times the original tempo. However, they accelerate at slightly different rates, so over the course of the piece they go out of phase with each other.
  • At 0:57 the two parts re-synchronize with each other and with the drums.

For a detailed explanation of the background and mathematics (and a little bit of music history) read my blog post about Creating Tempic Integrations.

De l’Expérience

De l’Expérience Performance. Photo Credit: Rébecca Kleinberger

I got to work on Tod Machover’s De l’Expérience in a number of capacities: sound design, recording the premiere performance at the Orchestre Symphonique de Montréal’s concert hall, and mixing the resulting multitracks. We also used a custom surround panning and imaging system I developed for blending the live electronics with the organ in the venue.

The piece is a 23-minute composition for organ, electronics, and spoken voice. The text is taken from the writings of Michel de Montaigne, the French author who invented the essay.

It’s pretty unusual for anyone to listen to 23 minutes of music on the web, but if you have the time and a decent sound system (no laptop or iPhone speakers, please), the piece is pretty magnificent and well worth hearing in its entirety.

Vocal Vibrations

The Installation at Le Labo Paris

I worked on the audio at pretty much every level for this installation, which is all about the human voice:

  • Recording
  • Sound Design
  • Speccing and building the 10.2-channel playback system
  • Engineering and mixing the 10.2-channel version for the installation
  • Engineering and mixing the 5.1 and stereo mixes for commercial release

This project was filled with interesting creative decisions, from taking advantage of the acoustics of the venue to developing the workflow for 10.2 surround mixing. The 10-channel mix led to some creative and powerful surround tools that I plan to develop further as part of my master’s thesis at the Media Lab.

There’s lots more information on the Opera of the Future page. The installation opened in Paris, France in March, but it’s going to be the first exhibit at the new Le Laboratoire Cambridge when it opens on October 30th! If you are around Boston, you can come see it for yourself.

Mixing Death and the Powers: Global Interactive Simulcast

Powers Live

Death and the Powers is a massive technical undertaking that keeps growing even more massive. The February 2014 performance in Dallas, TX added another layer of technology, connecting audiences across the globe.

Tod Machover’s composition blends acoustic orchestra with carefully engineered and synthesized electronic samples to create the unique sound of the Opera.

Close mics on the orchestra, lavalieres on the singers, a variety of synthesizers and samplers, and 16-channel encoded ambisonic playback make up the 100+ channels mixed at front of house on a Studer Vista 5.

For the Dallas performance, we broadcast a multi-camera shoot with 5.1 and stereo mixes to nine different theaters around the world, leaving us with three simultaneous mixes:

  1. Live mix for the PA in the hall, mixed on a Studer Vista 5 at front of house
  2. Surround 5.1 mix for the simulcast mixed on a Studer Vista 1 in our makeshift sound studio
  3. Stereo mix for the simulcast also mixed on the Studer Vista 1

Audiences in the remote venues were encouraged to download our mobile app, which synchronized with the performance, adding additional content and interactivity tied to the show happening in Dallas.

On the Console

I got to mix the two live simulcast mixes, and learned my way around the Studer Vista in the process. I’ve done lots of mixing before, but not a lot of live sound, so this was a very exciting opportunity. The Vista console is also an amazing piece of gear. Every console has a predetermined amount of DSP, all running on FPGAs. The assignment of the FPGAs is totally customizable: you choose the number of busses, sends, EQs, and compressors needed for your given show, then compile a virtual machine and load it into the console before the show.