Tempic Integrations is a musical study that experiments with multiple simultaneous tempi (the plural of tempo) in a musical composition. While listening to the composition below:

• Can you hear the parallel musical tempi as they accelerate and decelerate relative to each other?
• Can you hear when the parallel musical tempi de-synchronize and re-synchronize?
• How can we design these tempo changes so the layers synchronize at defined musical moments? (Hint: calculus)
• How can we use this process musically?

### Motivation

Consider the factors that make a musical instrument expressive. The gold standard is the human voice. Can any other instrument be as expressive as the human voice? Probably not, though some may come close. Other instruments are still useful because they extend our capabilities.

This project explores a particular way of extending our sonic palette. It uses integral calculus to unlock a class of previously inaccessible rhythmic patterns. Specifically, it shows how simultaneous musical tempi can continuously accelerate and decelerate relative to each other while coming in and out of phase at defined musical points, as shown in the image above. The work involved three parts:

1. Design of a mathematical algorithm for computing the continuous tempo curves required for the polytempic accelerando shown in the image above.
2. Implementation of Python routines and a workflow for generating, auditioning, and editing the patterns in a Digital Audio Workstation.
3. Composition of a musical piece using the algorithm implementation.

While listening to the composition, notice the two melodic patterns:

• Both patterns play the same melody, one octave apart.
• At the start of the piece both patterns play together at the same tempo, synchronized with each other and with the kick drum.
• Both patterns accelerate to 1.5 times the original tempo. However, they accelerate at slightly different rates, so over the course of the piece they go out of phase with each other.
• At 0:57 the two parts re-synchronize with each other and with the drums.
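The calculus behind this can be sketched in a few lines of Python. The following is a minimal, hypothetical sketch, not the project's actual code; the 120 BPM starting tempo and the one-beat offset are my own assumptions for illustration. The idea: the number of beats a voice has played is the time-integral of its tempo, and two voices resynchronize when their beat counts differ by a whole number of beats.

```python
def beats_elapsed(start_bpm, end_bpm, duration):
    """Beats played under a linear tempo ramp: the integral of tempo
    (in beats per second) over time, i.e. the area of a trapezoid."""
    return (start_bpm + end_bpm) / 2.0 / 60.0 * duration

# Voice A ramps from an assumed 120 BPM to 1.5x (180 BPM) over 57 seconds.
beats_a = beats_elapsed(120, 180, 57)

# For voice B to resynchronize at t = 57 s while drifting out of phase
# along the way, it must finish exactly one beat ahead of voice A.
# Solve the trapezoid formula for voice B's end tempo:
target_beats = beats_a + 1
end_bpm_b = 2 * 60.0 * target_beats / 57 - 120

print(beats_a)     # 142.5 beats for voice A
print(end_bpm_b)   # voice B ends slightly faster than 180 BPM
```

Both voices accelerate continuously, but because voice B's ramp is solved to land on an integer beat offset, the two lines realign precisely at the chosen musical moment.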

For a detailed explanation of the background and mathematics (and a little bit of music history) read my blog post about Creating Tempic Integrations.

I worked on the audio at nearly every level for this installation, which is all about the human voice.

• Recording
• Sound Design
• Specifying and building the 10.2-channel playback system
• Engineering and mixing the 10.2-channel version for the installation
• Engineering and mixing the 5.1 and stereo versions for commercial release

This project was filled with interesting creative decisions, from taking advantage of the acoustics of the venue to developing a workflow for 10.2 surround mixing. The 10-channel mix led to some creative and powerful surround tools that I plan to develop further as part of my master's thesis at the Media Lab.

There's lots more information on the Opera of the Future page. The installation opened in Paris, France in March, and it will be the first exhibit at the new Le Laboratoire Cambridge when it opens on October 30th! If you are around Boston, you can come see it for yourself.

Death and the Powers is a massive technical undertaking that keeps growing even more massive. The February 2014 performance in Dallas, TX added another layer of technology connecting the audiences across the globe.

Tod Machover's composition blends acoustic orchestra with carefully engineered and synthesized electronic samples to create the unique sound of the Opera.

Close mics on the orchestra, lavaliers on the singers, a variety of synthesizers and samplers, and 16-channel encoded ambisonic playback make up the 100+ channels mixed at front of house on a Studer Vista 5.

For the Dallas performance, we broadcast a multi-camera shoot with 5.1 and stereo mixes to nine theaters around the world, leaving us with three simultaneous mixes:

1. Live mix for the PA in the hall, mixed on a Studer Vista 5 at the front of house
2. Surround 5.1 mix for the simulcast mixed on a Studer Vista 1 in our makeshift sound studio
3. Stereo mix for the simulcast also mixed on the Studer Vista 1

Audiences in the remote venues were encouraged to download our mobile app, which synchronized with the show, adding extra content and interactivity with the performance happening in Dallas.

I got to mix the two live simulcast mixes, and I learned my way around the Studer Vista in the process. I've done lots of mixing before, but not much live sound, so this was a very exciting opportunity. The Vista console is also an amazing piece of gear. Every console has a fixed amount of DSP, all running on FPGAs. The assignment of the FPGAs is totally customizable: you choose the number of busses, sends, EQs, and compressors needed for your show, compile a virtual machine, and load it into the console before the show.
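As an illustration of that configuration step, here is a hypothetical Python sketch (my own invented names and costs, not Studer's actual tooling or DSP figures): building a show file amounts to fitting the requested processing blocks into the console's fixed DSP budget.

```python
# Hypothetical total DSP units available on the console.
DSP_BUDGET = 1200

# Hypothetical per-block costs, in DSP units.
COSTS = {"channel": 4, "bus": 6, "send": 1, "eq": 2, "compressor": 3}

def dsp_required(config):
    """Total DSP units a requested console configuration would consume."""
    return sum(COSTS[block] * count for block, count in config.items())

# An example show: 100 input channels, 24 busses, 200 sends, etc.
show = {"channel": 100, "bus": 24, "send": 200, "eq": 100, "compressor": 48}

used = dsp_required(show)
assert used <= DSP_BUDGET, "configuration exceeds the console's DSP pool"
print(f"{used} of {DSP_BUDGET} DSP units allocated")
```

The real console performs this trade-off when it compiles the virtual machine: more busses mean fewer EQs and compressors, and vice versa, all drawn from the same FPGA pool.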

My own foray into "wearables". This unusual project came out of an experimental Media Lab class called Silicon Menagerie, in which we explored ways to augment human senses to simulate the kinds of experiences non-human animals have. We looked at many different animals, including anglerfish, bats, ants, and sharks, all of which can perceive stimuli that humans cannot.

For inspiration, my group looked to the honey bee. Honey bees have a fascinating array of sensory apparatus: they have five eyes, they communicate through dance, they operate as a hive mind, and they have a precise perception of time that lets them use the position of the sun as a dependable reference point as it moves across the sky.

My group built a sensor network and heads-up display mounted in a stylish wearable package inspired by the ultraviolet vision of bees.

Steel cable makes up the enclosure, and 3D-printed parts hold everything together.

We used an iPod touch for the display and an Arduino Mega for communicating with the sensors.

I'm launching Pioneer (github).

• Full stack blogging application built with Meteor
• Written entirely in CoffeeScript
• Designed for simplicity, maintainability, and performance

This site, www.CharlesHolbrow.com is built using Pioneer.

Edit: Pioneer has been deprecated. This blog is now a static page built with Metalsmith.io.

For an optimist, not knowing what will happen next is the most exciting and wonderful feeling. “I just might record a #1 hit with my Indie Rock Revival Accordion String Band”. Then it takes courage to separate what could happen next from what is likely to happen next. Doing so means admitting “No, I probably won’t become a Rock Star bigger than The Beatles, and even worse, I’m probably not even going to win the lottery.” This realization might turn an optimist into a pessimist.

However, we mustn't let trifles stand in our way, and we certainly do not wish to be pessimists! So let us make accordion music not because we want fame or recognition, but because by doing so we find and share joy. Ah ha! Not feeling so pessimistic anymore!

...but I liked that feeling of wondering what will happen next, fantasizing about the good and fearing the bad. Now that I think about it, letting my mind wander through the possibilities inspired me to prepare for the hard times as much as it excited me for the good times. Excellent. I will continue to wonder optimistically about the future. My curiosity will inspire the motivation to find and love the good times and to weather the bad. I may bicycle across Alaska! I may share my wonder with my friends and family through art and music. I may adopt or father children and send this wonder to the next generation. I may wonder wonderful things as long as I live, for best of all: I may always wonder what will happen when I die. Brilliant! Optimism restored!

Wondering is exciting, and one of my favorite ways of wondering is wandering. In the city or the wilderness, I never know what is around the next corner. When I find out, there is always another corner not far ahead.

Inspired by my love for wandering, I am developing an iPhone game about exploration. The development process was itself wondrous: I started with only the minimalist (but open-source) MOAI Mobile Framework, so I had to code nearly everything myself. I also made difficult decisions about structuring the code base (I am more familiar with the full-featured game framework Unity 3D, where most complex organizational decisions are already made). I am especially proud of the 2D lighting and shadow engine I coded in C++.