I’m quite happy with how my granular synthesis engine turned out, but something was bothering me: it just wasn’t very good for any kind of live performance. It has so many parameters, which is great in a studio context, but I can’t imagine trying to improvise or perform live with it. Even with a MIDI controller of some kind, I could still only realistically change two parameters at a time.

GrainPlane is my response to this problem: it’s a physical interface built specifically for granular synthesis. It’s more or less a mechanism for creating a one-to-one mapping of physical grain action in the real world (rice, beans, sand, etc.) to auditory grains. Dropping a single grain onto the surface of the instrument triggers a single audio grain; letting a stream flow onto the surface creates a denser, layered texture. I was able to get the latency down to a pretty impressive 5 ms or so. It’s relatively CPU-heavy, since all the calculations are done at audio rate, but I think the responsiveness is worth it.
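The audio side of that mapping is simple in principle: each detected impact cuts one short, windowed grain out of a source sample. The real instrument does this inside a Max/MSP patch; what follows is just a minimal Python sketch of the idea, and the function name and Hann window are my own illustrative choices, not part of the actual patch:

```python
import numpy as np

def play_grain(sample, start, length):
    """Cut one windowed grain out of a source sample.

    This is the audio half of the impact -> grain mapping: one physical
    grain hitting the surface produces one of these in the output.
    """
    grain = sample[start:start + length].copy()
    grain *= np.hanning(len(grain))  # Hann window avoids clicks at the grain edges
    return grain
```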

The way it works is relatively simple, and has been done before for other applications: a set of piezo contact mics is attached to the back of the surface, and their audio is processed by a Max/MSP patch running on the computer. By combining the signals from multiple sensors in Max’s DSP engine, I can not only detect the impact of grains hitting the surface but also make reasonable guesses at each grain’s relative size (or the speed at which it strikes the surface) as well as its approximate X and Y position on the surface.
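To make that concrete, here’s a hedged Python sketch of one way such detection could work; the actual patch is Max/MSP and may well differ. The four corner sensor positions, the threshold values, and the amplitude-weighted centroid used for position are all assumptions for illustration:

```python
import numpy as np

# Assumed setup: four piezo sensors at the corners of a unit-square surface.
SENSOR_POS = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
ONSET_THRESHOLD = 0.05  # illustrative value; in practice this needs tuning

def envelope(x, decay=0.999):
    """Crude one-pole peak follower over a mono signal."""
    env = np.empty_like(x)
    e = 0.0
    for i, s in enumerate(np.abs(x)):
        e = max(s, e * decay)
        env[i] = e
    return env

def detect_impacts(channels):
    """channels: (4, N) array of piezo signals, one row per sensor.

    Yields (sample_index, amplitude, (x, y)) for each detected impact.
    Amplitude stands in for grain size/speed; position is estimated as
    the amplitude-weighted centroid of the sensor locations.
    """
    envs = np.stack([envelope(ch) for ch in channels])
    total = envs.sum(axis=0)
    armed = True  # simple debounce so one hit doesn't fire repeatedly
    for n in range(total.shape[0]):
        if armed and total[n] > ONSET_THRESHOLD:
            amps = envs[:, n]
            pos = (SENSOR_POS * amps[:, None]).sum(axis=0) / amps.sum()
            yield n, float(total[n]), (float(pos[0]), float(pos[1]))
            armed = False
        elif total[n] < ONSET_THRESHOLD * 0.5:
            armed = True
```

Timing differences between sensors could also be used (a time-difference-of-arrival approach), which tends to localize hits more precisely than amplitudes alone.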

The software interface allows you to map any of the controller’s features to any of the synthesis parameters. Grain pitch, pan, sample source, and length are all quite fun to play with.
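Continuing the Python sketches above, a mapping might look something like this; these particular assignments and ranges are my own illustrative choices, not the patch’s defaults:

```python
def map_impact_to_grain(amplitude, x, y):
    """One hypothetical mapping from a detected impact to grain parameters."""
    return {
        "pan": x,                                     # left/right position on the surface
        "pitch": 0.5 + y,                             # front/back axis bends pitch (0.5x-1.5x)
        "length_ms": 20 + 200 * min(amplitude, 1.0),  # harder hits -> longer grains
        "sample_source": 0,                           # which loaded sample to cut grains from
    }
```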

I got the opportunity to present this instrument at the Audio Mostly conference in Sweden.

Download the paper from ACM, or download it here.