Text

Deathless


I just finished a new piece of music. This is a different direction for me, and one I’m excited about. With one exception (sampled marimba), all the sounds on it are acoustic instruments and household objects, all played by me (and by Jon Davis on bass clarinet). It’s also a substantial move away from dance music.

Fifteen years ago, when I got more serious about writing music, I was going to a lot of dance parties. Most of my friends and musical peers were involved in that world. I wanted to write music that would work on headphones but could also work on a dance floor. It was a real revelation when I realized, this year, that that isn’t important to me anymore. I almost never go to dance parties, and I’m not really interested (mostly) in performing in that context. It’s going to be a bit of a slow process, though, since all the methods I’ve developed over more than a decade have been about creating heavily loop-based, beat-driven material. But I’m committed to making that shift.

I took a five-day retreat away from work and internet and phone and people in rural Tennessee a couple of weeks ago, and started working on this piece then. I read Cat Valente’s remarkable novel _Deathless_ while I was there, and it was the core source of inspiration for this music.

Instruments in the piece are:
Classical guitar
Fujara (Slovakian shepherd’s flute)
Bass clarinet
Sampled marimba
Cigarette butts
Coffeepot
Oil lamp
Cabinets

You can hear the piece here: https://soundcloud.com/tinpanalgorithm/deathless

Recent performance

I recently performed for a celebration of a new downtown mural. The mural was by Jimmy O’Neal, and based on cymatics, so I wanted to focus on mathematical, process-based work. I took the three finished data-based pieces I had done with Sonify and recast them as slow, evolving soundscapes. I’m linking to all three here, although there was a fair amount of live manipulation as well, so these recordings don’t give a complete sense of what the performance was like. See previous entries for more info on how these pieces were made.

Global Drifter

Clustered Sines

Climate Reference

I’m also including a short, ambient version of Millennium Hand, which I threw together just to cover the transitions between the three main pieces.

Millennium Hand

Current project

I’m mostly interested right now in building a virtual audio installation. I care about the compositional side, not the programming side, so I was hoping to find a turnkey tool for this, probably a game engine. But there just doesn’t seem to be one with the level of audio realism that I want, or even one that handles obstruction/occlusion, so that, e.g., an audio source inside a room sounds different depending on whether the listener is inside or outside the room.

Admittedly I’m a bit limited by being a Mac user, since most game development happens on Windows. But several major game engines have Mac development environments, as does a major piece of audio middleware (FMOD Studio), and none of them gives me what I need.

I’ve been in correspondence with some folks at UNC Chapel Hill, which has a lot of cool projects coming out of it. One that seems promising is GSound (check out the kickass demo), but that’s just a C++ API. I’ve never written C++, which has a reputation for a steep learning curve, and like I said: I mainly want to write music for this kind of environment, not build it myself. But I’m not sure I see a better option. Possibly I can write a minimal GSound application, not much more than their example code, and use IPC (or conceivably bindings for GSound) to communicate with an app in another language, or even an existing game engine. Investigating that is where I’m at right now. It’s been a couple of weeks of research, and I’m a bit frustrated at this point.
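
To make the IPC idea concrete, here’s a minimal sketch of the composition-side half: a Python script that streams listener and source positions to a hypothetical C++ host over UDP. The message format, port, and source name here are all my own placeholders, not anything GSound defines.

```python
import json
import socket

def encode_update(listener_pos, sources):
    """Pack the listener position and a list of (source_id, position)
    tuples into one JSON datagram for the audio host to apply per frame."""
    return json.dumps({
        "listener": list(listener_pos),
        "sources": [{"id": sid, "pos": list(pos)} for sid, pos in sources],
    }).encode("utf-8")

# Walk the listener along the x axis past a fixed source, sending one
# datagram per step to the host (here, a placeholder port on localhost).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
host = ("127.0.0.1", 9000)  # wherever the minimal GSound app would listen
for step in range(4):
    listener = (step * 0.5, 0.0, 1.6)  # x, y, z in meters
    sock.sendto(encode_update(listener, [("drone", (10.0, 0.0, 1.6))]), host)
sock.close()
```

The C++ side would just parse each datagram and update the propagation engine’s listener/source transforms; everything compositional stays in the scripting language.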

Suggestions very welcome.

Chronotope

Hey folks,

I’ve just released an EP, “Chronotope.” You can stream it (for free) and/or buy it on Bandcamp.

http://tinpanalgorithm.bandcamp.com/

CRN output

Here’s a piece of audio made from Climate Reference Network data. You’re hearing six stations spread across the continental US (WA, CA, ND, TX, ME, GA). It’s a year’s worth of data (2012) for four hourly measurements: air temperature, soil temperature (10 cm), soil temperature (50 cm), and solar radiation. Each station becomes a synthesizer (Reaktor, running an FM synth) that plays the same note repeatedly for the duration of the piece. So for six stations, there are six notes: a C-major chord superimposed on a D-minor chord. For each synthesizer, four parameters are controlled by the readings: three set the amount of FM depth applied between various oscillators, and the fourth sets the decay length of the envelope on the main oscillator.
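
For anyone curious what that data-to-parameter step looks like, here’s a rough sketch of the idea (this is not the actual Sonify code or the Reaktor patch; the CC numbers and sample readings are made up for illustration): each measurement series gets normalized and quantized into a controller lane.

```python
def normalize(values):
    """Linearly rescale a series into the range 0.0-1.0."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in values]

def to_cc(values):
    """Turn normalized readings into 7-bit MIDI CC values (0-127)."""
    return [round(v * 127) for v in normalize(values)]

# One station, four hourly measurements -> four controller lanes.
station = {
    "air_temp":  [-3.1, -2.5, 0.4, 5.0, 3.2],     # degrees C
    "soil_10cm": [ 0.2,  0.2, 0.5, 1.1, 1.0],
    "soil_50cm": [ 1.4,  1.4, 1.5, 1.6, 1.6],
    "solar_rad": [ 0.0, 12.0, 340.0, 610.0, 95.0],  # W/m^2
}
cc_numbers = {"air_temp": 20, "soil_10cm": 21, "soil_50cm": 22, "solar_rad": 23}
lanes = {cc_numbers[name]: to_cc(series) for name, series in station.items()}
```

In the real pipeline, three of those lanes would drive FM depth and the fourth the envelope decay, one set per station/synth.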

I’m focusing on temperature measurements, so the structure of the piece is determined by the cycle of the seasons. It starts in midwinter, progresses to midsummer, and by the end it’s back to midwinter. You’ll notice that the beginning and ending of the piece sound pretty similar as a result.

At the time-compression rate I’m using, one day of data becomes one second of audio. Because I chose a tempo (120 BPM) at which a whole second is exactly two beats, there’s a secondary cycle audible every two beats, caused by diurnal patterns of temperature and solar radiation.
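
The beat math is simple enough to check in a couple of lines (the leap-year day count is just what 2012 implies; the piece’s exact length isn’t stated here):

```python
# One day of data compresses to one second of audio.
tempo_bpm = 120
seconds_per_beat = 60 / tempo_bpm            # 0.5 s per beat
beats_per_data_day = 1.0 / seconds_per_beat  # one second of audio = 2 beats
days_in_2012 = 366                           # 2012 was a leap year
core_material_seconds = days_in_2012 * 1.0   # roughly six minutes for the year
```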

Each station/synth is playing its note at a slightly different rate, causing the shifts in rhythmic feel over the course of the piece (see Whitney Music Boxes for a nice visual illustration of this).
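
A minimal sketch of that phasing effect (the periods below are illustrative, not the piece’s actual rates): each voice repeats at a slightly different period, so the merged onset pattern drifts out of and back into alignment, and the gaps between successive onsets shrink and grow over time.

```python
def onsets(period, until):
    """Note-onset times (in seconds) for one repeating voice."""
    times, t = [], 0.0
    while t < until:
        times.append(round(t, 4))
        t += period
    return times

# Three voices with slightly different repeat rates, merged into one
# timeline; the changing inter-onset gaps are the shifting rhythmic feel.
periods = [0.50, 0.51, 0.52]
merged = sorted(t for p in periods for t in onsets(p, 10.0))
gaps = [round(b - a, 4) for a, b in zip(merged, merged[1:])]
```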

This will end up in a dance music setting, most likely; it’ll have some percussion and bass and incidental bits, but nothing that will obscure the clarity of hearing the days and the seasons through a complete cycle.

http://www.tinpanalgorithm.com/workinprogress/CRN1.mp3

Global Drifter - details

There’s a fair amount of info in earlier posts about Sonify itself, but here’s a bit more info on the processes specific to Global Drifter.

The buoy data is transformed into the sounds that you hear at the beginning of the piece and that run throughout. They’re essentially left to run through the whole piece, although a couple of times I lowpass them briefly while I introduce new parts (the lowpass frequency drops abruptly and then rises slowly back up). You’re hearing six buoys which start very near each other (about midway between Cuba and West Africa) and then diverge over a two-year period (all of 2011 and 2012). This means the six buoys start out with pretty similar timbres, which grow less similar over the course of the piece.

You can hear just the buoy sounds by themselves here.

Each buoy is voiced by an identical instance of the Zebra soft synth, each with its own distinct pitch, constant throughout the piece. Each buoy reports three measurements (latitude, longitude, and temperature), each of which is mapped to a parameter of the corresponding Zebra instance. Latitude is mapped to oscillator waveform, shifting between a square wave and a spike. Longitude is mapped to the intensity of a combined bandpass and band-reject filter applied to the oscillator. The oscillator runs into an FM oscillator, and temperature is mapped to the rate of an LFO controlling FM depth.
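
The shape of that mapping looks something like this (a hedged sketch only: the parameter names, input ranges, and output ranges are placeholders I’ve invented, not Zebra’s actual identifiers or the values used in the piece):

```python
def scale(v, in_lo, in_hi, out_lo=0.0, out_hi=1.0):
    """Linear map from one range to another, clamped at the ends."""
    x = (v - in_lo) / (in_hi - in_lo)
    x = min(1.0, max(0.0, x))
    return out_lo + x * (out_hi - out_lo)

def buoy_to_params(lat, lon, temp_c):
    """One buoy reading -> three synth parameters (all ranges illustrative)."""
    return {
        "osc_waveform":   scale(lat, -40.0, 40.0),          # square <-> spike
        "filter_amount":  scale(lon, -80.0, 20.0),          # bandpass/reject mix
        "fm_lfo_rate_hz": scale(temp_c, 5.0, 30.0, 0.1, 8.0),  # warmer = faster
    }
```

The last line is why warm water is easy to hear: temperature feeds the LFO rate directly, so a buoy drifting into warm water gets an audibly faster wobble.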

Those are purely aesthetic choices. A handful of mappings would have made some kind of objective sense (for example, longitude to stereo position), but generally it’s fairly arbitrary. I did want it to be possible, with a little practice, to know roughly what the data are doing based on the sound. Temperature is the easiest: LFO rate stands out, especially when it gets fast, so the buoys with that fast LFO sound at any given time are the ones that have drifted into warmer waters.

Once I had the buoy part done, the rest was just about putting it in context (in this case, an electronic dance music context). Mostly I added percussion and bass. My intention was to keep the setting fairly simple so that the sonified data could be the primary element. Probably the sound that competes with it the most is the skittery high-pitched percussion that comes in around 2:30, which is the modified sound of me whacking on some bamboo, recorded with a Zoom stereo field recorder. The first sound that comes in other than the buoys (around 0:30) is an instrument I built from a sample of noise from (if I remember right) a very early phonograph recording; it felt really right to me in this piece because somehow it reminds me of sonar.

Again, see earlier posts for more info on the Sonify API that I created to do this (or see the Sonify page itself). If anyone who’s read this far is interested in working with the API, feel free to reach out for support; I’m very interested in having other folks use it.

Global Drifter

Finally, the first complete piece made from ocean current data is finished!

https://soundcloud.com/tinpanalgorithm/global-drifter

Southbound Train

And now for something completely different: here, finally, is the mix for Southbound Train, now with MCing by Mark Davis.

https://soundcloud.com/tinpanalgorithm/southbound-train

First music made with buoy data

Here’s the (interpreted) sound of six Global Drifter buoys wandering the ocean between Cuba and West Africa, for a two-year period ending this New Year’s Day:

http://www.tinpanalgorithm.com/workinprogress/GlobalDrifter02_02_01.mp3

Made with my Sonify API.

Soft Lockdown

Here’s the first piece constructed using Sonify, the API I’ve been writing. The source data is sine waves of varying frequencies. Sonify -> MIDI -> Ableton, where I added percussion and bass and a bit of this and that. The title is from this article.

http://soundcloud.com/tinpanalgorithm/soft-lockdown
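
For a rough idea of the Sonify -> MIDI step with that sine-wave source data, here’s a toy sketch; these functions are stand-ins I’ve written for illustration, not Sonify’s actual API.

```python
import math

def sine_series(freq_hz, n_samples, rate_hz=10.0):
    """Sample a sine wave: the kind of synthetic source data described above."""
    return [math.sin(2 * math.pi * freq_hz * n / rate_hz)
            for n in range(n_samples)]

def to_midi_notes(series, lo_note=48, hi_note=72):
    """Map -1..1 samples onto a MIDI note range (C3 to C5 by default)."""
    span = hi_note - lo_note
    return [lo_note + round((s + 1) / 2 * span) for s in series]

# A slow sine sampled 16 times becomes a rising-and-falling note contour,
# ready to be written out as MIDI and arranged in Ableton.
notes = to_midi_notes(sine_series(0.5, 16))
```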