Tempic Integrations

Tempic Integrations is a musical study that experiments with the use of multiple simultaneous tempi (the plural of tempo) in a musical composition. While listening to the composition below, consider:

  • Can you hear the parallel musical tempi as they accelerate and decelerate relative to each other?
  • Can you hear when the parallel musical tempi de-synchronize and re-synchronize?
  • How can we design these tempo changes so the layers synchronize at defined musical moments? (Hint: calculus)
  • How can we use this process musically?

Motivation

Consider the factors that make a musical instrument expressive. The gold standard is the human voice. Can any other instrument be as expressive as the human voice? Probably not, though some may come close. Other instruments are still useful because they extend our capabilities.

Example of a simple polytempic accelerando

This project explores a particular way of extending our sonic palette. It uses integral calculus to unlock a class of previously inaccessible rhythmic patterns. Specifically, it shows how simultaneous musical tempi can continuously accelerate and decelerate relative to each other while coming in and out of phase at defined musical points, as shown in the image above.
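
The core constraint can be sketched in a few lines of Python. This is a deliberately simplified model, not the algorithm from the project itself: assume each voice follows a linear tempo ramp, so the number of beats it plays is the integral of its tempo curve over time, and two voices that start together realign whenever their beat counts differ by a whole number. All names and numbers below are mine, chosen for illustration.

```python
# Simplified model: two voices start in sync at tempo b0 (beats per second)
# and both end at ratio * b0, but voice 2 finishes its linear ramp early and
# then holds its final tempo, so it accumulates extra beats and drifts out
# of phase. The voices realign when the extra beats total a whole number.

def ramp_beats(b0, b1, duration):
    """Beats elapsed during a linear tempo ramp: the integral of tempo
    over time, which for a straight line is average tempo times duration."""
    return 0.5 * (b0 + b1) * duration

def ramp_time_for_extra_beats(b0, ratio, total, extra_beats):
    """How long voice 2's ramp should last (holding at b0 * ratio afterward)
    so it gains exactly `extra_beats` over a voice that ramps for the full
    `total` seconds.
    Derivation: delta = (ratio - 1) / 2 * b0 * (total - t2), solved for t2."""
    return total - 2.0 * extra_beats / ((ratio - 1.0) * b0)

# Example: 120 bpm = 2 beats/sec, accelerating to 1.5x over 57 seconds,
# with voice 2 gaining exactly 3 whole beats so both re-align at 0:57.
b0, ratio, total, k = 2.0, 1.5, 57.0, 3
t2 = ramp_time_for_extra_beats(b0, ratio, total, k)

voice1 = ramp_beats(b0, ratio * b0, total)
voice2 = ramp_beats(b0, ratio * b0, t2) + ratio * b0 * (total - t2)
print(t2)               # 51.0 -- voice 2 ramps for 51s, then holds
print(voice2 - voice1)  # 3.0 -- a whole number of beats, back in phase
```

The actual curves used in the piece are smoother than these piecewise-linear ramps (see the blog post linked below), but the constraint is the same: the difference between the two tempo integrals must be a whole number of beats at each synchronization point.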

The project had three phases:

  1. Design of a mathematical algorithm for computing the continuous tempo curves required for the polytempic accelerando shown in the image above.
  2. Implementation of Python routines and development of a workflow for generating, auditioning, and editing the patterns in a Digital Audio Workstation.
  3. Composition of a musical piece using the implementation.

As you listen to the composition, notice the two melodic patterns:

  • Both patterns play the same melody, one octave apart
  • At the start of the piece both patterns play together at the same tempo, synchronized with each other, and with the kick drum.
  • Both patterns accelerate to 1.5 times the original tempo. However, they accelerate at slightly different rates, so over the course of the piece they go out of phase with each other.
  • At 0:57 the two parts re-synchronize with each other and with the drums.

For a detailed explanation of the background and mathematics (and a little bit of music history) read my blog post about Creating Tempic Integrations.

De l’Expérience

De l’Expérience Performance. Photo Credit: Rébecca Kleinberger

I got to work on Tod Machover’s De l’Expérience in a number of capacities, including sound design, recording the premiere performance at the Orchestre Symphonique de Montréal orchestra hall, and mixing the resulting multi-tracks. We also used a custom surround sound panning and imaging system I developed for blending the live electronics with the organ in the venue.
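
The custom system itself isn’t documented here, but the basic idea behind surround panning can be illustrated with a standard textbook technique: constant-power panning of a mono source between the two nearest speakers in a ring. The function below is a generic sketch with names of my choosing, not code from the actual system.

```python
import math

def ring_pan(azimuth_deg, num_speakers):
    """Constant-power pan of a mono source across a ring of equally
    spaced speakers. Returns one gain per speaker. This is a generic
    technique, not the custom system described above."""
    spacing = 360.0 / num_speakers
    pos = (azimuth_deg % 360.0) / spacing      # position in 'speaker units'
    lo = int(pos) % num_speakers               # nearest speaker behind
    hi = (lo + 1) % num_speakers               # nearest speaker ahead
    frac = pos - int(pos)                      # 0..1 between the pair
    gains = [0.0] * num_speakers
    # A sin/cos crossfade keeps total power constant as the source moves.
    gains[lo] = math.cos(frac * math.pi / 2.0)
    gains[hi] = math.sin(frac * math.pi / 2.0)
    return gains

# A source at 22.5 degrees in an 8-speaker ring sits exactly between
# speakers 0 and 1, so each gets a gain of about 0.707:
print([round(g, 3) for g in ring_pan(22.5, 8)])
```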

The piece is a 23-minute composition for organ, electronics, and spoken voice. The text is taken from the writings of Michel de Montaigne, the French author who invented the essay.

It’s pretty unusual for anyone to listen to 23 minutes of music on the web, but if you have the time and a decent sound system (no laptop or iPhone speakers, please), the piece is pretty magnificent and well worth listening to in its entirety.

Vocal Vibrations

The Installation at Le Labo Paris

I worked on the audio at pretty much every level for this installation, which is all about the human voice:

  • Recording
  • Sound Design
  • Specifying and building the 10.2 channel playback system
  • Engineering and mixing the 10.2 channel version for the installation
  • Engineering and mixing the 5.1 and stereo versions for commercial release

This project was filled with interesting creative decisions, from how to take advantage of the acoustics of the venue to developing the workflow for 10.2 surround mixing. The 10.2 channel mix led to some creative and powerful surround tools that I plan on developing further as part of my master’s thesis at the Media Lab.

There’s lots more information on the Opera of the Future page. The installation opened in Paris, France in March, and it will be the first exhibit at the new Le Laboratoire Cambridge when it opens on October 30th! If you are around Boston, you can come see it for yourself.

Mixing Death and the Powers: Global Interactive Simulcast

Powers Live

Death and the Powers is a massive technical undertaking that keeps growing ever more massive. The February 2014 performance in Dallas, TX added another layer of technology, connecting audiences across the globe.

Tod Machover’s composition blends acoustic orchestra with carefully engineered and synthesized electronic samples to create the unique sound of the Opera.

Close mics on the orchestra, lavaliers on the singers, a variety of synthesizers and samplers, and 16-channel encoded ambisonic playback make up the 100+ channels to be mixed at the front of house on a Studer Vista 5.
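
For context on the ambisonic layer: 16 channels corresponds to third-order ambisonics. A first-order encoder is small enough to show the principle; the sketch below uses the standard B-format equations and is an illustration, not the show’s actual playback code.

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode one mono sample into first-order ambisonic B-format
    (W, X, Y, Z). Sixteen channels, as used in the show, corresponds
    to third order; first order is shown here for brevity.
    Angles are in radians; traditional B-format scales W by 1/sqrt(2)."""
    w = sample / math.sqrt(2.0)
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

# A source directly in front (azimuth 0, elevation 0) drives W and X only:
print(encode_first_order(1.0, 0.0, 0.0))
```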

For the Dallas performance, we broadcast a multi-camera shoot with 5.1 and stereo mixes to 9 different theaters around the world, leaving us with 3 simultaneous mixes:

  1. Live mix for the PA in the hall, mixed on a Studer Vista 5 at the front of house
  2. Surround 5.1 mix for the simulcast mixed on a Studer Vista 1 in our makeshift sound studio
  3. Stereo mix for the simulcast, also mixed on the Studer Vista 1

Audiences in the remote venues were encouraged to download our mobile app, which synchronized with the performance in Dallas, adding extra content and interactivity.

On the Console

I got to mix the two live simulcast mixes, and learned my way around the Studer Vista in the process. I’ve done lots of mixing before, but not a lot of live sound, so this was a very exciting opportunity. The Vista console is also an amazing piece of gear. Every console has a predetermined amount of DSP, all running on FPGAs. The assignment of the FPGAs is totally customizable: you choose the number of busses, sends, EQs, and compressors needed for your given show, compile a virtual machine, and load it into the console before the show.

See Like A Bee

Cover Photo

My own foray into “wearables”. This unusual project came out of the experimental Media Lab class called Silicon Menagerie. In this class, we explored ways to augment human senses to simulate the kinds of experiences that non-human animals have. We looked at lots of different animals, including anglerfish, bats, ants, and sharks, all of which can perceive stimuli that humans cannot.

For inspiration, my group looked to the honey bee. Honey bees have a fascinating sensory apparatus: they have five eyes, they communicate through dance, they operate as a hive mind, and they have a perception of time precise enough to use the position of the sun as a dependable reference point as it moves across the sky.

My group built a sensor network and heads-up display mounted in a stylish wearable package inspired by the ultraviolet vision of bees.

Steel cable makes up the enclosure; 3D printed parts hold everything together.

Early Build

We used an iPod touch for the display and an Arduino Mega for communicating with the sensors.
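
As a rough illustration of the sensor side, a host could read the Mega’s sensor values over USB serial with pyserial. The port name, baud rate, and message format below are my assumptions for the sketch, not the project’s actual protocol.

```python
import serial  # pyserial; the assumed host-side library

# Hypothetical sketch of a host reading sensor frames from the Arduino
# Mega over USB serial, one comma-separated line of readings per frame.
port = serial.Serial('/dev/ttyACM0', baudrate=115200, timeout=1.0)

while True:
    line = port.readline().decode('ascii', errors='ignore').strip()
    if not line:
        continue  # timeout or empty line
    try:
        readings = [int(v) for v in line.split(',')]
    except ValueError:
        continue  # skip partial or garbled frames
    # The Mega's ADC is 10-bit, so raw values run 0-1023; normalize to 0-1.
    normalized = [r / 1023.0 for r in readings]
    print(normalized)
```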

Simulated Bee Vision

Pioneer

I’m launching Pioneer (github).

  • Full-stack blogging application built with Meteor
  • Written entirely in CoffeeScript
  • Designed for simplicity, maintainability, and performance

This site, www.CharlesHolbrow.com, is built using Pioneer.

Edit: Pioneer has been deprecated. This blog is now a static site built with Metalsmith.io.