Comusica is a virtual choir piece created for MIT's 2020 virtual commencement. Based on a score by Evan Ziporyn, it brings together over 800 vocal submissions from graduating students, each one a single sung note, into a single musical work.

As part of the Comusica team, I worked with Eran Egozy on designing algorithms to computationally arrange the crowd-sourced vocal samples into a template outlined by the score. With over 800 individual submissions, it was impossible to manually place each voice clip so that it corresponded to the score, represented a range of voices, and mixed together elegantly.

To address this, we developed a computational approach that analyzed the vocal samples, performed a first round of automatic editing, and flagged samples for manual refinement (pitch correction, denoising, declipping, etc.). Participants' submissions often contained additional auditory material, high levels of background noise, diverged from the indicated pitches, or exhibited other forms of variation. Our goal was not to remove all variation, but to soften it enough that voices could blend together and represent the score while exhibiting the diversity of voices in the graduating class community.

We then used a variety of heuristics to determine an arrangement based on the clips' auditory features, the sung notes, the need for notes at given points in the score, and so on. Our approach incorporated constraints to ensure smooth musical transitions and a natural chorus-like quality as the voices combined, and to balance the representation of different contributions throughout the piece as evenly as possible. We also built a suite of interactive tools for swapping, moving, and adjusting individual samples, and then performed a manual mix of the many layers into the final audio.
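To give a flavor of the analysis-and-flagging stage, here is a minimal sketch of how a sample's sung pitch might be estimated and checked against the score. It is not the pipeline we shipped: the autocorrelation pitch estimator, the 50-cent tolerance, and the function names are all illustrative assumptions.

```python
import numpy as np

def estimate_f0(samples, sr, fmin=80.0, fmax=1000.0):
    """Rough fundamental-frequency estimate via autocorrelation (a sketch,
    not the production analyzer)."""
    samples = samples - samples.mean()
    corr = np.correlate(samples, samples, mode="full")[len(samples) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # search plausible pitch lags
    lag = lo + np.argmax(corr[lo:hi])
    return sr / lag

def cents_off(f0, target_hz):
    """Deviation from a target pitch in cents (100 cents = 1 semitone)."""
    return 1200 * np.log2(f0 / target_hz)

def flag_for_review(samples, sr, score_pitches_hz, tol_cents=50.0):
    """Match a clip to its nearest score pitch; flag it for manual
    refinement if it deviates by more than tol_cents."""
    f0 = estimate_f0(samples, sr)
    devs = [cents_off(f0, t) for t in score_pitches_hz]
    i = int(np.argmin([abs(d) for d in devs]))
    return score_pitches_hz[i], devs[i], abs(devs[i]) > tol_cents
```

For example, a clean A4 (440 Hz) tone would pass, while a clip sung a full semitone flat would be flagged for pitch correction.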
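The balancing constraint in the arrangement stage can be sketched as a greedy assignment: each score slot takes the matching clips that have been used least so far, so no single voice dominates. This is a toy illustration under assumed data shapes, not the actual heuristics used in the piece.

```python
from collections import defaultdict

def arrange(clips, slots):
    """Greedily assign clips to score slots.

    clips: list of (clip_id, pitch) pairs from the analysis stage
    slots: list of (time, pitch, n_voices) requirements from the score
    Prefers clips whose sung pitch matches the slot and that have been
    placed least often so far, balancing representation across the piece.
    """
    usage = defaultdict(int)
    placement = []
    for time, pitch, n_voices in slots:
        candidates = [cid for cid, p in clips if p == pitch]
        candidates.sort(key=lambda cid: usage[cid])  # least-used first
        chosen = candidates[:n_voices]
        for cid in chosen:
            usage[cid] += 1
        placement.append((time, chosen))
    return placement
```

In practice the real arrangement also weighed auditory features and transition smoothness, and any placement could still be overridden by hand with the interactive tools.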

More information about Comusica
An article in the Boston Globe