Footage of Airflow v0.3 added to Airflow page

We just got a brand spankin' new version of Airflow going this past Friday! A nicer video documenting the changes will come soon, but for now, here's some raw phone footage of Ben riding the new system, which has been rather sloppily added to the Airflow project page:

A re-write of the Airflow page which re-frames the bulk of the discussion around the new system is probably forthcoming.

Lots of new articles and news!

  • I've added lots of new pages to my Projects hub, as many things are making their way out into the public.
    • Check out the television game show pilot I worked on called Mind Control.
    • Check out the video art piece I made with Ben Kato called Oto wo sukurachi.
    • Check out the VR flying machine I've been building called Airflow.
    • The long overdue page for my MFA thesis Touchpoint is on the horizon, I promise!

"Love Has No Labels" project page added

Well, this is certainly one of the biggest things I've ever worked on. Maybe the biggest?

In November of 2014, Mindride was contracted to create a live X-Ray machine. What happened over the next several months took us down a really fun rabbit hole of motion capture that ended up with us having some sweet wireless mocap suits.

I made a write-up about it, which has been added to the Projects page. View it here.

"The Distillery" podcast interview

Sam Botstein had me on his show a few days ago, where I talked about being a music technologist working outside of music technology, why OSC and musicians got off on the wrong foot and how to fix it, the state of performing popular electronic music, Touchpoint (of course), and probably about 70 other things.

It's about 70 minutes long. It's a good opportunity to see how I really talk, in the absence of being able to compose my ideas. It also shows you how terrible I am at actually answering questions.

Nevertheless, it was a lot of fun. Be sure to check out other episodes of the show as well - Sam's interview with Tom Erbe is great.

More Touchpoint performances are online!

Lots of old video documentation of using Touchpoint has been uploaded to YouTube since I last updated!

Here was its first public appearance, processing the turntables of Sam Botstein at a Grids, Beats & Groups concert:

Next, we have the second performance in the Touchpoint Series. This is a three-player, serialized, networked session with Christopher Knollmeyer and Colin Honigman:

Audiovisual display courtesy of Gabriel Rey-Goodlatte.

Video of "Touchpoint Series #1" now online, Summer dev demo sessions playlist too

My solo recital in a set of three recitals, sampling my voice from a mic input, is now available in 720p on YouTube. Naturally this video will appear in a proper write-up on Touchpoint when the time comes, but for the time being, here it is outside of that.

Here is also a SoundCloud playlist of clips that were rendered during an earlier stage of development, around August 2013 in Berlin at Native Instruments headquarters:

"Baraka" MTIID MFA Show from Spring 2013

On March 6th, 2013, the CalArts Music Technology MFAs collaborated on a new live film score to the 1992 film Baraka as our second semesterly concert. Since this project was not my own, I hadn't posted it to my projects hub.

Upon referencing it in the previous article about the Global Net Orchestra, I realized I didn't have a good audio stream of one of my pieces from it hosted anywhere. So now, this is a consolidation of the content I generated from that show.

"Untitled (Plinky)" had a couple of idea nuclei behind it:

  • wanting to do a piece that made the pressing and releasing of the sustain pedal on an old, crappy-sounding piano sound HUGE.
  • wanting to do a piece on an old, crappy piano with the sustain pedal held down that just keeps descending in chromatic 7ths (C, D, E, F, G, A, B, going down an octave each time, for example). Obviously these two ideas could live well together.
  • convolving two unintelligible strains of speech synthesis together.
  • big, big, big time The Girl with the Dragon Tattoo Soundtrack by Trent Reznor and Atticus Ross influence.

My piece with Raphael Arar was interesting because it was totally "instrumental". He improvised a plodding, dark march feel on piano, to which I controlled a reverb return. I performed live effects processing (via Sugar Bytes Turnado) on samples of my niece Brooklynn struggling to say the alphabet, which had been extremely time-stretched in SPEAR. I was locked to a timeline and Raphael wasn't.

We arrived at something that consistently took breaths in the same place from rehearsal to rehearsal but was somewhat improvisatory outside of that. It was cool.

Global Net Orchestra performance, March 1st 2014

In early February 2014, my professor Ajay Kapur brought to my attention the solicitation of performers for a 100-person "laptop orchestra"-style network music performance led by Roger Dannenberg, named the Global Net Orchestra. Excited at the opportunity to play in a network music piece and to work with Dr. Dannenberg (who is the original author of the open-source software Audacity that I use for my raw data experiments, and the developer of Synthetic Performer, among many other things!), I keenly volunteered.

Roger provided very detailed instructions for how we were to submit a long, prepared WAV file that contained each note we would be playing, with silent gaps in between, so that each member of the orchestra had their own voice. I decided to re-use a nice, buzzy Kontakt patch I had created many years ago at Berklee, which previously appeared as the ominous drone of this video. (The sound was recorded by another Berklee student some years before that; it's an electric shaver moving around a face, chromatically tuned.)

GNO software running during a rehearsal.

We had five rehearsals, of which I was able to attend the minimum two - some were held at 3AM to accommodate performers in East Asia! The experience of participation was part PLOrk, part Guitar Hero, part graphical score, and part drum circle. The final concert, representing Carnegie Mellon University, was on March 1st, and took place as part of the Ammerman Center for Arts & Technology 2014 Symposium at Connecticut College.

Notable participants included Ge Wang and Pauline Oliveros, with Janelle Burdell serving as the ensemble's drummer, keeping time on a Zendrum. We played some Bach, we played a piece by Roger, and we made some stuff up. Overall, there were 64 performers in the final concert, with hopes of doing it again sometime in the near future.

Read more about the event on Carnegie Mellon's website, check out the map of performers (I'm cloud_canvas in Valencia, CA!), and read a review in the Pittsburgh Post-Gazette.

Initial Raspberry Pi experiments

Last week, my girlfriend Amber and I headed to All Electronics.  I didn't realize she'd never been before; she was freaking out. We managed to suppress most (but not all) of our impulse buys.

What we absolutely could not resist, though, was this joystick. It was a part without a project: we needed to use it in something... it had the same sound as a Street Fighter machine. (As a side note, it had never occurred to me that those joysticks are just four limit switches whose combinations give you eight different control messages. Tight.)

They were also having a massive sale on really robust tour cases. And they had the big "American-style" flashy arcade buttons (like you'd see on a MIDI Fighter) out on display. That was it: "Let's try to repackage our games from last year's expo as a self-contained unit, with an embedded LCD monitor, arcade controls and 5V cell-phone power."

That was my excuse for buying two Raspberry Pi Model Bs. 

Installing Raspbian on the first unit.

I had heard warnings from some of my friends that this project would probably not end with success: apparently, Processing (which hosts most of the engine for our video games) runs terribly on the Pi - ostensibly from getting choked through a JVM.

I've flirted with the idea of triple-booting my MacBook Pro with Ubuntu before in a flight of power-user nerdiness, but I've never actually tried it. I knew that OS X is essentially a hot-rodded UNIX system, and that any Apple customer who's comfortable talking to Darwin in Terminal probably shouldn't be afraid of any distro. And so, when I got the Pis two nights ago, I delved into Raspbian and into figuring out how to install Processing and ChucK on it.

Comparison screenshots of some of my notes about the OS X port's HID mappings (right) vs. the Raspbian port's mappings (left).


Note that on the D-Pad, Raspbian doesn't even register button presses, just axial hat movement. This was completely unlike the other versions. 

I say "port" when I talk about the mouse, keyboard, OS X controller, Windows controller, Raspbian controller and Rail Bow versions of TD Skillz, but for the most part I just mean "HID object remappings."

ChucK has a great library object for HID devices. Like "hi" in Max, any joystick, racing wheel, or other weird peripheral can be identified as just a series of channels and values (usually 0-1). The strange thing is that for the same USB controller, the same button press (the "left" button, for instance) comes up on totally different paths from operating system to operating system. So, each platform needs a fresh diagnostic pass over the paths so that the source code downstream can be adjusted.

Raspbian's implementation was especially weird because it didn't even treat the D-Pad as button presses; it treated it as axial hat movement, as though it were two analog sticks with three possible values (-1, 0, and 1). So, this took a bit more of a rewrite in Processing than I expected, but it was still only a few minutes' work.
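To make the per-platform remapping idea concrete, here's a minimal Python sketch (standing in for the actual ChucK/Processing code, which isn't reproduced here). All element indices and control names are invented for illustration - the real ones come from running a diagnostic pass on each OS:

```python
# Per-OS lookup tables mapping a raw HID element index to a logical control.
# The same physical button arrives on a different path on each platform,
# so each build carries its own table. Indices below are hypothetical.
BUTTON_MAPS = {
    "osx":      {0: "fire", 1: "jump", 11: "dpad_left", 12: "dpad_right"},
    "windows":  {2: "fire", 3: "jump", 13: "dpad_left", 14: "dpad_right"},
    # Raspbian exposes no D-Pad buttons at all -- see hat handling below.
    "raspbian": {0: "fire", 1: "jump"},
}

def map_button(platform, element):
    """Translate a raw button element index into a logical control name."""
    return BUTTON_MAPS[platform].get(element)

def map_hat(axis, value):
    """Raspbian-style D-Pad: two 'analog' axes that only ever report
    -1, 0, or 1. Convert an (axis, value) pair into a synthetic press."""
    if axis == 0:
        return {-1: "dpad_left", 1: "dpad_right"}.get(value)   # horizontal
    if axis == 1:
        return {-1: "dpad_up", 1: "dpad_down"}.get(value)      # vertical
    return None
```

With tables like these, the game code downstream only ever sees logical names like "dpad_left", and porting to a new platform means regenerating one table rather than touching the gameplay logic.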

Although it required spending several hours getting quite intimate with Debian's amazing apt-get package manager along the way (pulling in ALSA, libsndfile, Bison, Flex, etc.), the port worked "perfectly." The only problem is, it was awful. It was just as my friends said it was: unusably slow. Like, below one frame-per-second slow.

Oh well. I have two awesome cheap computers to run emulators and play with digital GPIO on in nice cases, now, anyway. I added the functional (if unusable) Raspbian port of TD Skillz to the GitHub repository, and the project page has likewise been updated.

Interested in getting your hands on a Raspberry Pi of your own? Visit element14 and pick one up.

3 new raw data experiments

Shortly before I returned to CalArts from Berlin, I fell into a trap of fancy that I haven't visited since 2010: raw data experiments. That is, taking the raw binary of non-audio file formats - be they compressed, pre-compiled, or just plain text files - and encoding it as digital audio of varying spatialization, bit depth, sample rate, dithering technique, etc. using Audacity. This is a great and seriously unsung feature of Audacity, and I love using it.
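As a rough illustration of what Audacity's raw-data import does, here's a Python sketch that treats a file's bytes as unsigned 8-bit PCM and wraps them, untouched, in a WAV container. The function names and the mono/8-bit choices are my assumptions - Audacity offers many other interpretations (16-bit, different byte orders, stereo, and so on), each of which yields a different-sounding result from the same bytes:

```python
import io
import wave

def bytes_to_samples(raw: bytes):
    """Interpret arbitrary bytes as unsigned 8-bit PCM, scaled to [-1.0, 1.0)."""
    return [(b - 128) / 128.0 for b in raw]

def sonify(raw: bytes, sample_rate: int = 44100) -> bytes:
    """Wrap raw bytes, unmodified, in a mono 8-bit WAV container (in memory)."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)           # mono; stereo would halve the duration
        w.setsampwidth(1)           # 8-bit unsigned: one byte = one sample
        w.setframerate(sample_rate)
        w.writeframes(raw)          # the file's bytes ARE the samples
    return buf.getvalue()

# Stand-in for "any non-audio file": in practice you'd read a .psd,
# a Reaktor .ens, a compiled binary, etc. with open(path, "rb").read()
data = bytes(range(256)) * 64
wav = sonify(data)
```

At 44.1 kHz and one byte per sample, a file needs to be a few megabytes before it yields more than a minute of audio - which is why complex, large file formats make the best source material.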

The big difference between last time's go and this one is that these new ones were a bit more composed. The previous three ("FingerDIFFPARAM," "Aerobic," and "M1A1 Shapes") were purely exploratory; each is one complex computer file that, when encoded as digital audio, resulted in a song-length track of beautifully organized, circuit-bent-sounding noise music.

These new tracks, by contrast, are edited together and otherwise processed from several files. In "Rawgust," 4-5 medium-length sound files were nested together to make one larger form whose repetitions appeared to be in song form. In some cases, I couched other files inside of longer files that had a perceptually silent gap in them on the order of a couple minutes.

In "Save0,"  the fragments were much more granular. Files between a tenth of a second and 10-15 seconds are edited together in rapid succession to make larger phrases.

In "Showa," I combined this favorite technique of mine with another: abusing de-noising algorithms. Finding two different renders of a file I liked quite a bit, but which had too much overall random, broadband noise, I fed them through an experimental build of iZotope RX3 in which the noise profile the plugin is fed is pretty much the entire signal itself, and then I turned the reduction settings to the most extreme available. The result of this sounds like a spectral music hot shower, which is why I called it "Showa," the Japanese loanword from English for "shower."

I find that files with a lot of nested redundancy, but overall a lot of variation on the macro level - as well as big chunks of pre-compiled data put together with uncompiled relationships - work wonderfully for this kind of process. Reaktor .ens files, Photoshop .psd files, and any big chunk of data from a game engine like Unreal Engine or Unity are particular favorites of mine.

In general, most files will just produce short bursts of noise. Text files, images, simple file formats like that will give you just a few milliseconds of static. But, based on some experimental principles worked out by a friend of mine, you can do things like make a high-frequency sawtooth wave by pasting "abcdefghijklmnopqrstuvwxyz" 44,100 times into a plaintext file; or even just a constant DC offset by writing millions of "j"s into a file. There is still much exploration left to do with this form of content mining. 
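Those two constructions are easy to sanity-check with a quick sketch. Interpreting text bytes as unsigned 8-bit samples at 44.1 kHz (my assumed import settings), the repeating 26-letter alphabet forms a rising 26-sample ramp - one cycle of a sawtooth with a fundamental around 1.7 kHz - while a run of identical characters is a constant sample value, i.e. pure DC offset:

```python
SAMPLE_RATE = 44100
ALPHABET = b"abcdefghijklmnopqrstuvwxyz"

# "abc...z" repeated: bytes 97..122 form a rising ramp that repeats
# every 26 samples, i.e. a (low-amplitude, offset) sawtooth wave.
samples = [(b - 128) / 128.0 for b in ALPHABET * SAMPLE_RATE]

period = len(ALPHABET)                 # 26 samples per cycle
frequency = SAMPLE_RATE / period       # ~1696 Hz fundamental

# A long run of the same character decodes to one constant value:
# no oscillation at all, just a DC offset.
dc = [(b - 128) / 128.0 for b in b"j" * 1000]
```

So "44,100 copies of the alphabet" is one second of tone per 44,100 cycles' worth of bytes, and the pitch is set entirely by how many distinct characters make up the repeating unit.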

Do be advised that this process is not optimized to live in musically acceptable frequency ranges by default - some files will indiscriminately throw a near-full scale 18KHz at you, for instance.

I'm not sure what to do with these pieces of music, but I love them. Maybe they will live on as sample fodder for my future songwriting efforts or for others to sample, maybe they will be left as-is... Hell, I'd just as easily press them on vinyl à la Smojphace.

Hello World!

Welcome to the new version of the site, gracefully enabled by Squarespace. While I had little issue with BlueHost as a host, and I still highly recommend them if you just want to host a site without any very large files, I had to admit that I'm no website designer. And thus, this new site, where content can come first instead of awful, thoughtless formatting.

Hope you enjoy your stay!