Fluid Music: A New Model for Radically Collaborative Music Production

My Media Lab PhD Dissertation is available as a PDF:

Abstract

Twentieth century music recording technologies were invented to capture and reproduce live music performances, but musicians and engineers used these new tools to create the art of music production, something distinctly different from – and well beyond – simple archiving. Are current technologies poised to bring about new kinds of musical experiences yet again? Could these new kinds of music be every bit as transformative and impactful as recorded music was in the 20th century? Fluid Music proposes one possible trajectory by which this could happen: harnessing the internet’s power for massive asynchronous collaboration in the context of music production.

This dissertation articulates how Fluid Music proposes new tools for computational music production, prioritizes human agency via audio production paradigms found in digital audio workstations, and rejects existing collaborative processes like remixing and crowdsourcing. It describes the Fluid Music framework, a software toolkit that provides the foundation to build, experiment with, and study Fluid Music workflows. It discusses the long-term goals of Fluid Music, including the construction of a massive, open repository of music production Techniques that sound designers, producers, and composers can use to share malleable sound worlds without reimplementing complex music production processes, and it demonstrates how design choices in the Fluid Music framework support the project’s larger objectives. One consequence of these design choices is that Fluid Music encapsulates the art of music production in the same way that recorded music encapsulates the art of music performance.

The dissertation lays out the next steps required for Fluid Music to grow into an entirely new art form, one clearly distinct from the recorded music that is pervasive today.

Fluid Music Examples

The Fluid Music framework enables users to encapsulate professional music production techniques in reusable and reconfigurable JavaScript modules.

The following examples were scripted with the JavaScript-based fluid score language and produced using the Fluid Music audio server. They aim to demonstrate that professional-quality audio production can be applied automatically to symbolic digital scores.

The examples were generated entirely in code. A GUI was used only in the final stages to trim and normalize the output of the Fluid Music audio server.
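
To give a sense of the workflow, here is a minimal sketch of a fluid score: pattern strings map single characters to reusable techniques, and a session renders the score into a DAW project file. The pattern characters, sample package, and option values below are placeholders rather than a complete working score.

```js
// A simplified fluid score. Each character in a pattern string triggers a
// technique from the technique library (tLibrary); the 'r' string assigns a
// rhythmic duration to each character position.
const fluid = require('fluid-music')
const kit = require('@fluid-music/kit') // placeholder technique library

const score = {
  tLibrary: kit.tLibrary,
  r:     '1 + 2 + 3 + 4 + ',
  kick:  'd       d d     ', // character-to-sample mappings are placeholders
  snare: '    k       k   ',
  tamb:  't t t t t t t t ',
}

// Create a session with one track per score key, insert the score, and write
// a Reaper session file (a DAW session, not an audio or MIDI file).
const session = new fluid.FluidSession({ bpm: 96 }, [
  { name: 'kick' },
  { name: 'snare' },
  { name: 'tamb' },
])
session.insertScore(score)
session.saveAsReaperFile('beat.RPP')
```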

Example 1: Processed Guitar

See the score repository on GitHub for the code that created this audio. To make the audio sound polished, I packaged the following music production techniques using the Fluid Music framework:

  • A sub-bass synthesizer adds low end to the kick drum, giving it extra OOMPH!
  • Reversed and processed guitar samples from the @fluid-music/g3rd npm package are time aligned with the musical grid.
  • Tempo-synchronized stereo width modulation (this technique is subtle in the example, but can be much more dramatic).
  • A subtle reverb ‘glues’ the mix together.

These are just some of the kinds of production techniques that can be encapsulated with the Fluid Music library. Used with care, techniques like these are a big part of what makes prerecorded music stand out and sound professional.
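
Roughly, a technique like the sub-bass layering above can be packaged as a small reusable module. The sketch below assumes the fluid-music convention that a technique is an object with a use method; the sample paths are simplified placeholders.

```js
// Sketch: a reusable technique that layers a sub-bass sample under a kick
// sample, so any score that triggers it gets the extra low end automatically.
const path = require('path')
const fluid = require('fluid-music')

class KickWithSub {
  constructor() {
    // The sample file names are placeholders.
    this.kick = new fluid.techniques.AudioFile({ path: path.join(__dirname, 'kick.wav') })
    this.sub  = new fluid.techniques.AudioFile({ path: path.join(__dirname, 'sub-sine.wav') })
  }

  // The technique is invoked with a context describing where in the session
  // it was triggered; delegating to both AudioFile techniques places the kick
  // and the sub-bass sample at the same position.
  use(context) {
    this.kick.use(context)
    this.sub.use(context)
  }
}

module.exports = { KickWithSub }
```

A score can then map a character to new KickWithSub() in its tLibrary, just like a built-in technique.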

Example 2: Trap Drums

Trap music is built around manipulated TR-808 drum samples. The next example uses the @fluid-music/tr-808 npm package as source material for the drum and bass samples.

  • Custom timing techniques applied to the drum samples produce the characteristically complex trap-style hi-hat patterns.
  • Micro-timing adjustments shift drum samples off of the timeline grid so that the samples’ transients don’t mask each other when they overlap (a sketch of this idea follows the list).
  • Subtle reverb and bus compression ‘glue’ the mix together.
  • This second trap example builds on the techniques in the first, expanding the sonic palette without duplicating effort.
  • Side-chain compression (with “ghost kicks”) pumps one of the pad synthesizers in time with the musical rhythm.
  • The @fluid-music/kit package works like a drum sampler preset, with features like sample randomization and dynamic layers.
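
The micro-timing idea above can be expressed as a tiny wrapper technique. The sketch below nudges any existing technique slightly off the grid; the context field name is a simplified placeholder rather than the exact library interface.

```js
// Sketch: wrap any technique and shift its start time by a few milliseconds
// so that overlapping transients don't mask each other.
class Nudge {
  constructor(technique, offsetSeconds) {
    this.technique = technique
    this.offsetSeconds = offsetSeconds
  }

  use(context) {
    // Invoke the wrapped technique with a copy of the context, shifted in
    // time. 'startTimeSeconds' stands in for the actual timing field.
    this.technique.use({
      ...context,
      startTimeSeconds: context.startTimeSeconds + this.offsetSeconds,
    })
  }
}

// Usage: nudge a clap technique 12 milliseconds late relative to the kick it
// overlaps (clapTechnique could be any existing technique, for example one
// taken from the @fluid-music/tr-808 package).
// const lateClap = new Nudge(clapTechnique, 0.012)

module.exports = { Nudge }
```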

Example 3: Seven and Five

In Seven and Five I tried to make a slightly longer composition that holds interest throughout. I also tried to make something that would be infeasible or impossible to create without Fluid Music.

The piece is based around a MIDI delay (echo), except that each delayed copy contains mutated MIDI content. The delays are grouped in five repetitions of seven notes, giving the composition its name.

Note that the code in the score repository is more complex than in the previous examples, so it’s a more challenging starting point if you are just exploring Fluid Music for the first time.

  • Custom techniques insert up to 70 MIDI notes per invocation (a simplified sketch follows this list).
  • Features the unusual 35/32 time signature.
  • Uses computationally generated custom Technique Libraries that create the arpeggiated patterns that make up the harmonic foundation of the piece.
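
The sketch below shows the general shape of one of these generated techniques: a single invocation inserts a whole burst of MIDI notes, each copy delayed and transposed relative to the last. The option names and context fields are simplified placeholders, not the actual score code.

```js
// Sketch: a generated technique that inserts repetitions * notesPerRepetition
// MIDI notes per invocation, spreading the copies across the duration of the
// triggering event and mutating each one.
const fluid = require('fluid-music')

function makeEchoArpeggio(rootNote, repetitions = 5, notesPerRepetition = 7) {
  const count = repetitions * notesPerRepetition
  return {
    use(context) {
      for (let i = 0; i < count; i++) {
        const note = new fluid.techniques.MidiNote({
          note: rootNote + (i % notesPerRepetition) * 2, // mutate each copy
          velocity: Math.max(10, 90 - i * 2),            // fade successive copies
        })
        // Shift each copy later in time ('startTime' and 'duration' stand in
        // for the actual context fields).
        note.use({
          ...context,
          startTime: context.startTime + (i / count) * context.duration,
        })
      }
    },
  }
}

module.exports = { makeEchoArpeggio }
```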

Example 4: Nikhil Singh’s Fluid Adaptations

Composer Nikhil Singh worked with the Seven and Five score to compose Fluid Adaptations. His composition keeps the same harmonic material, mutates the existing timbres, and adds new ones.

His score also converts the 35/32 time signature to 4/4, a change that would be difficult or impossible to do without the Fluid Music framework.

Read Nikhil’s full score on GitHub at nikhilsinghmus/fluid-demotrapremix. By reading the updated package.json file, we can also see that the score adds a dependency on the Battery 4 plugin adapter.

Fluid Music Beta Workshop

Fluid Music is an extensible music composition and music production framework for Node.js, currently in closed beta.

If you are interested in computational or procedural music production, please consider joining the closed beta by participating in a two-part Fluid Music workshop. Fill out the workshop interest form, and you will receive a follow-up email shortly with more information.

Workshop participants will:

  • Learn the Fluid Music API foundations
  • Programmatically create and render DAW sessions
  • Code a reusable sound design technique
  • Create music by sharing sound design techniques with other participants

Participants will be invited to a two-part workshop:

  1. Introduction & Fluid Music Basics January 22 or 23, 1pm - 2:30pm
  2. Developing Custom Techniques January 24 or 25, 1pm - 2:30pm

Each workshop will consist of a pre-recorded tutorial video and a subsequent Zoom meeting. I will also hold “Office Hours” to answer questions and troubleshoot technical issues.

What is Fluid Music?

A quick overview of the fluid-music npm package:

An example project (note that this video is for an earlier version of Fluid Music):

How is Fluid Music different from SuperCollider, Csound, etc.?

There are already several code-based languages for audio design. Why do we need another one? Existing tools like Csound, SuperCollider, Max, and Pd are useful for live coding and for building experimental and interactive audio tools with digital primitives like oscillators and filters. However:

  • Fluid Music aims to be useful for creating music that people care about. Currently, this means that it has to complement and integrate with a Digital Audio Workstation (DAW) and plugin (VST) based workflow. Fluid Music creates DAW session files as opposed to audio or MIDI files.

  • Fluid Music is designed around JavaScript and the npm ecosystem. It is made to enable reuse and sharing of sound design techniques. If you have a favorite collection of audio samples, encapsulate them in an npm package, and computationally insert them into your sessions. Automate sound design techniques that you use often. Publish and import them using npm. Create music by mixing and matching existing fluid-music npm packages. The sketch below illustrates this workflow.
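
Concretely, a shared technique package can be as small as the sketch below: it bundles a few audio samples and exports them as techniques, and any score that installs the package can trigger them by character. The package name and sample files are hypothetical.

```js
// index.js of a hypothetical npm package ("my-snaps") that bundles two audio
// samples and exports them as fluid-music techniques.
const path = require('path')
const fluid = require('fluid-music')

module.exports.tLibrary = {
  a: new fluid.techniques.AudioFile({ path: path.join(__dirname, 'samples', 'snap-soft.wav') }),
  b: new fluid.techniques.AudioFile({ path: path.join(__dirname, 'samples', 'snap-hard.wav') }),
}

// A downstream score installs the package (npm install my-snaps) and mixes
// its techniques into a score's technique library:
//
// const snaps = require('my-snaps')
// const score = {
//   tLibrary: { ...snaps.tLibrary },
//   r:    '1 + 2 + 3 + 4 + ',
//   perc: '  a   b   a   b ',
// }
```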

Workshop prerequisites

You do need to be comfortable with Node.js and npm, and have at least a little familiarity with music production. Otherwise, there are no prerequisites - I would love participants to have a range of different backgrounds! However, there are a limited number of seats, so availability will depend on the level of interest.

It will be helpful if you have used either the Reaper or Tracktion Waveform DAW, but you can learn the basics of either one in an afternoon if needed.

Why are you holding the workshops?

I’ve been developing the Fluid Music system as part of my PhD dissertation within the Opera of the Future Group at the MIT Media Lab. The goal of the Fluid Music system is to enable many users to create and share compatible tools and techniques, and to use those tools and techniques for producing music.

To be broadly useful, Fluid Music needs input from other software developers and music producers. So I am looking for input! I will incorporate feedback into the analysis section of my dissertation (by participating, you will also be helping me graduate 🙏).

How much time will it take?

In addition to attending the workshop, plan to spend:

  • 30 minutes installing and setting up Fluid Music prior to the first workshop
  • 30 minutes in a follow-up interview (if you agreed when filling out the interest form)
  • Some time outside of the workshop coding and composing; how much is flexible, and depends on how far you want to push the coding and composition.

Your mileage will depend on your comfort with the core tools (Node, npm, Reaper).

The fluid-music library comes with support for some free and paid VST plugins. If you want to use them, plan to spend some additional time downloading and installing them (you can add support for your favorite VST plugins, but we won’t cover this in detail during the workshop).

Next steps

Interested in participating? Fill out the workshop interest form.

If you have further questions, please email me!

Getting Started Tutorials

To prepare for the first workshop, follow either the video or text-based tutorial below. Choose one:

To complete the tutorial, you’ll need to install cybr, the Fluid Music sound server. I’ll email a link to the executable so you don’t need to compile it.

Examples

Additional Resources

You can get started with Fluid Music by studying the following resources, but it will be much more efficient to learn this material by participating in the workshops.

Media Timelines

The World Wide Web was conceived 30 years ago. What will internet media look like after another 30 years? Should we expect it to be mostly the same? Or will it evolve rapidly in the coming years?

To explore these questions, I have been studying the evolution of other media technologies. In the process I am working to identify ways that they are both similar to, and different from, the internet.

Media Timelines packages this research in an interactive application that makes it easy to visualize the evolution of sound recording, film, internet, and other media technologies.

Try it out!

By employing a contextual zoom interface, Media Timelines makes it possible to zoom effortlessly from a high-level historical overview of 20th-century music and technology to a detailed timeline showing the exact day that a music album was released. The interface exposes a growing database of information, details, and historical anecdotes on art, music, media, and history.

This technique for interacting with historical media content led to some surprising conclusions about the current state of internet media, and some interesting hypotheses about the future, both of which will form the foundation for my PhD dissertation.

Cybernetic Media Production

In the early stages of my PhD, I was working on tools for creating, testing and iterating on augmented reality music videos. I called this system Cybernetic Media Production. Before discarding the idea, I developed a custom augmented reality framework. Here it is in action:

This is how it works:

  • I mounted an HTC Vive “Puck” Tracker on a DSLR camera
  • I wrote a utility that uses Valve’s OpenVR C++ library to access the HTC Vive tracking data
  • I created a 3D scene using the open source tools in OpenFrameworks.
  • Using the tracking data from the camera, I position a virtual camera in the 3D space
  • Using the tracking data from the HTC Vive controller, I put virtual objects into the scene, and layer them over the live feed from the DSLR

During development, I discovered that it is the coincidence of many small details that makes the AR illusion effective. The latency of the video feed has to be perfectly matched with the latency of the tracking data. The field of view of the virtual camera needs to be carefully calibrated against the “real” camera. The virtual objects need to render smoothly, which meant optimizing the rendering pipeline: efficiently rendering pretty, smooth lines in 3D is surprisingly difficult.

The idea behind Cybernetic Media Production was to procedurally create many different versions of a music video, and then A/B test the audio and video content the same way that big companies like Amazon and Microsoft test different versions of their websites. Microsoft was able to use A/B testing to increase advertising revenue by hundreds of millions of dollars. At the time, I was thinking “shouldn’t musicians be able to do the same thing?”

I believe that Cybernetic Media Production has commercial value. I also believe that some version of this practice is likely inevitable. The technology isn’t quite there to make it happen yet.

Initially, I intended to spend my PhD developing the technology to make Music Video A/B testing a reality. For every creative or ethical question I answered, another obstacle emerged. What kinds of artistic parameters should be A/B tested? Will A/B tested content really offer new creative opportunities? The major web companies do a good job of measuring and manipulating the public for profit, but what are the creative advantages to thinking about artwork the way that the Big Four think about web pages?

With guidance from my advisor and dissertation committee, questions like these led me to reconsider. Eventually I did move my dissertation work away from what you might call the Amazon of music production toward the development of infrastructure to support the Wikipedia of music production. Ask me about this journey if you are curious – it’s an interesting story, and it speaks to unique characteristics of the MIT Media Lab’s unconventional academic environment.

However, I was able to repurpose my AR framework for something joyful. In the video below, the projections are created entirely by members of the audience, who are manipulating the 3D-tracked Vive controllers in real time.

Arts@ML

This internet-connected piano visualizer was made in one day as part of the Arts@ML Media Lab class. The project was the basis for a subsequent algorithmic EDM composition performed at a 99F event at the Media Lab.