- I've added lots of new pages to my Projects hub as many things make their way into public view.
Well, this is certainly among the biggest things I've ever worked on. Maybe the biggest?
In November of 2014, Mindride was contracted to create a live X-Ray machine. What happened over the next several months took us down a really fun rabbit hole of motion capture that ended up with us having some sweet wireless mocap suits.
A straggler Touchpoint video has surfaced from April 2014...
Audiovisual display courtesy of Gabriel Rey-Goodlatte.
On March 6th 2013, the CalArts Music Technology MFAs collaborated on a new live film score to the 1992 film Baraka as our second semesterly concert. Since this project was not my own, I had not posted it into my projects hub.
Upon referencing it in the previous article about the Global Net Orchestra, I realized I didn't have a good audio stream of one of my pieces from it hosted anywhere. So now, this is a consolidation of the content I generated from that show.
"Untitled (Plinky)" had a couple of idea nuclei behind it:
- wanting to do a piece that made the pressing and releasing of the sustain pedal on an old, crappy-sounding piano sound HUGE.
- wanting to do a piece on an old crappy piano with the sustain pedal held down that just descends in chromatic 7ths (C-D-E-F-G-A-B, going down an octave each time, for example). Obviously these ideas could live well together.
- convolving two unintelligible strains of speech synthesis together.
- a big, big, big-time influence from The Girl with the Dragon Tattoo soundtrack by Trent Reznor and Atticus Ross.
My piece with Raphael Arar was interesting because it was totally "instrumental". He improvised a plodding, dark march feel on piano, for which I controlled a reverb return. I performed live effects processing (via Sugar Bytes Turnado) of samples of my niece Brooklynn struggling to say the alphabet that had been extremely time-stretched in SPEAR. I was locked to a timeline and Raphael wasn't.
We arrived at something that consistently took breaths in the same place from rehearsal to rehearsal but was somewhat improvisatory outside of that. It was cool.
In early February 2014, my professor Ajay Kapur brought to my attention a solicitation of performers for a 100-person "laptop orchestra"-style network music performance led by Roger Dannenberg, named the Global Net Orchestra. Excited at the opportunity to play in a network music piece and to work with Dr. Dannenberg (one of the original authors of the open-source software Audacity that I use for my raw data experiments, and the developer of Synthetic Performer, among many other things!), I keenly volunteered.
Roger provided very detailed instructions for how we were to submit a long, prepared WAV file that contained each note we would be playing, with gaps of silence in between, so that each member of the orchestra had their own voice. I decided to re-use a nice, buzzy Kontakt patch I had created many years ago at Berklee, which previously appeared as the ominous drone of this video. (The sound was recorded by another Berklee student some years before that: an electric shaver moving around a face, chromatically tuned.)
We had five rehearsals, of which I was able to attend the minimum two, since, for instance, some were held at 3 AM to accommodate performers in East Asia! The experience of participation was part PLOrk, part Guitar Hero, part graphical score, and part drum circle. Representing Carnegie Mellon University, the Global Net Orchestra gave its final concert on March 1st as part of the Ammerman Center for Arts & Technology 2014 Symposium at Connecticut College.
Notable participants included Ge Wang and Pauline Oliveros, with Janelle Burdell serving as the ensemble's drummer, keeping time on a Zendrum. We played some Bach, we played a piece by Roger, we made some stuff up. Overall, there were 64 performers in the final concert, with hopes of doing it again sometime in the near future.
I didn't bother to figure out Squarespace's Embed widget until now. That's better ;)
Every year, the Music Technology program at CalArts puts out a compilation of some of the best tracks produced by students, whether from an assignment or just a personal project. They are selected by a curating member of the major and assembled for free on Bandcamp, alongside previous years' compilations, and other documents of live shows, like those from the ChucK concerts.
The compilation for the 2012 - 2013 school year arrived a little behind schedule, but it was quietly released on October 28th. It features two tracks by me.
- Raphael Arar - "Notes from the Apartment" (5:45)
- Lewis Godowski - "Durin's Bane" (4:17)
- JAEGER - "sun i want" (4:42)
- Nick Suda - "Morse" (6:34)
- Mark Morris - "Arctan" (4:18)
- Devin Ronneberg - "Hyfy" (4:44)
- Andrew Flores & Ashley Jacobson - "Senseless" (6:12)
- Rutaraj Wankhede - "Untitled" (4:52)
- Nick Suda & Andrew Flores - "Sneezestep" (4:59)
- Christopher Knollmeyer - "Ewan" (1:09)
- Jingyin He - "Stateless" (2:48)
- Youngmin Joo - "December Frost" (3:57)
- David Howe - "Cochlea Envy" (5:57)
- Bruce Lott - "Club 33 (Mutek Mix)" (4:23)
My first piece, "Morse," was a composition for Ajay Kapur's Composition for Robotic Instruments class. While the bulk of our semester was spent repairing Ajay's famous robots - such as MahaDeviBot, GanaPatiBot, and BreakBot (and also the RattleTron and Dimitri Diakopoulos' Glockenbot)- from the state they were left in following the move back into the Machine Lab from being used in Samsara, we also installed a new permanent robot, the Clapper, which is a series of solenoid strikers attached to blue LEDs that tap along a grid on the ceiling of the Machine Lab. Then, we very quickly assembled our compositions. They were presented at a concert event called Meet the Bots in late 2012 in the Machine Lab.
Each piece interpreted the idea of "composing" for the robots - which were radially distributed all around the room - very differently. Eric Singleton augmented a traditional Ableton Live-based through composition with reinforcement percussion from the robots. Jon He used a Monome playing John Conway's "Game of Life" to procedurally trigger accelerometer-augmented rhythms amongst the robots. Kameron Christopher leveraged neural network techniques with brainwave-reading sensors to play a piece with his mind. I just wrote a through composition using only the robots - a multi-section sort of thing unlike anything I'd quite written before.
I made a point of primarily basing the composition around Trimpin's permanent exhibit installation inside the Machine Lab - the JackBox. Many people avoid the JackBox when they're introduced to the robots because it's difficult to work with: its MIDI mapping is very complex across many different kinds of instrumentation (toy tennis rackets, shotglasses, single-stringed fretless guitar and bass necks), it operates in a mixed amplified and acoustic setting, and it responds quite inconsistently to the MIDI commands issued to it, in terms of velocity response, pickups feeding back, and such. Consequently, playback of my piece was pseudo-performative on my part, in the sense that I had to mix the electric stringed parts on the Machine Lab's mixing console as the piece was happening, even pulling back from a moment of feedback.
My second piece, "Sneezestep," was I think the third piece of music I ever composed and performed at CalArts, after the two from Decoding Dreams. It was a first attempt at wrenching myself away from through composition, of breaking the singer-songwriterly ways of working in ACID from the days of old to trying to figure out how to be performative. My partner Andrew Flores and I found a common ground in old, super-distorted, funky, aggressive tracks by LFO from the early 90's especially "Tied Up," and this became our main reference point for the ensuing track.
It was also the feature piece for some of my earliest compositional experiments with TouchOSC and later Lemur - and consequently the template which would eventually grow into the iPad-based instrument I'm creating now. There was also heavy use of the Arpeggiator to constantly re-trigger new sequences of two-bar phrases by mapping them to keys and holding them all down - one of my favorite Ableton Live tricks. It was a very rigid structure and in many ways a difficult project to realize, but that bubbling, industrial bassline I added at the last moment really gave the track what it needed.
I collaborated with fellow Music Technology MFA Jason Jahnke for my piece AutoLyrics in the Automaton concert I directed. While we originally planned a multi-faceted combination of various audio and visual projects Jason was working on, time eventually condensed our work to getting the first operational code onto a new instrument he had literally just finished fabricating and assembling hours earlier: the AquaHarp II.
The AquaHarp was a glass-striking instrument that ran code behind the scenes which essentially struck each of its glasses at random in a very long, slow loop. The impact was a sharp tap on the center of the glass, and the glasses used had a rather short decay period. The AquaHarp was shown at the same Digital Arts Expo at CalArts that Noise Floor and TD Skillz came from.
AquaHarp II improves on the earlier iteration in a number of important ways. First, the striking mechanism has been changed from a narrow, sharp piston to an acrylic hammer shape. The delay between electrically telling a solenoid to start and stop its strike also acts somewhat like a MIDI velocity message. Each glass position has a specially-sized mount for stabilization and security. The overall frame is lightweight and sturdy.
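That delay-as-velocity idea is easy to picture in code. Here's a minimal Arduino-style sketch of the general technique, assuming a solenoid driven from a digital pin through appropriate driver circuitry; the pin number, timing range, and strike() helper are hypothetical, not the AquaHarp II's actual firmware.

```
// Hedged sketch: pulse width as a rough stand-in for MIDI velocity.
// SOLENOID_PIN and the 3-15 ms range are placeholders, not real AquaHarp II values.
const int SOLENOID_PIN = 22;

void strike(int velocity) {                    // velocity: 1..127, as in MIDI
  int pulseMs = map(velocity, 1, 127, 3, 15);  // harder hits get a longer pulse
  digitalWrite(SOLENOID_PIN, HIGH);            // energize: hammer travels toward the glass
  delay(pulseMs);                              // time spent energized sets the strike energy
  digitalWrite(SOLENOID_PIN, LOW);             // release: hammer falls back
}

void setup() {
  pinMode(SOLENOID_PIN, OUTPUT);
}

void loop() {
  strike(100);   // a loud tap
  delay(1000);
  strike(40);    // a soft tap
  delay(1000);
}
```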
I had conceived of several ways to extend the algorithmic composition techniques of AutoLyrics to work with the AquaHarp II, and ultimately they worked perfectly and proved to be just as effective as I had imagined they would be... once we got the AquaHarp II responding to MIDI, which had never actually been done with the original AquaHarp.
This became my eleventh-hour project for Automaton; I needed to find a way to get MIDI I/O into a standard Arduino Mega without breadboarding out the pins of a 5-pin (MIDI) DIN jack or using firmware flashes such as HIDuino from my friend and fellow CalArts MTIID MFA alum Dimitri Diakopoulos, which would have had a steeper learning curve than I could afford at the moment.
Based on my flirty acquaintance with the actual MIDI spec, I was thinking about how standard MIDI messages are usually just three bytes anyway (one status byte and two data bytes) - I dunno if it's serial or parallel or whatever over a MIDI cable or USB or wireless MIDI, but it's just three bytes - so I figured I should be able to get this into the serial monitor of a running Arduino without problems.
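For concreteness, here's what one such message looks like as raw bytes - a purely illustrative Arduino snippet (note and velocity values chosen arbitrarily), ignoring wrinkles like running status:

```
// Illustrative only: the three bytes of a MIDI note-on for middle C,
// velocity 100, on channel 1 - one status byte followed by two data bytes.
byte noteOn[3] = {
  0x90,  // status byte: note-on, channel 1
  0x3C,  // data byte 1: note number 60 (middle C)
  0x64   // data byte 2: velocity 100
};

void setup() {
  Serial.begin(9600);                // ordinary sketch rate, for the serial monitor
  for (int i = 0; i < 3; i++) {
    Serial.println(noteOn[i], HEX);  // prints 90, 3C, 64
  }
}

void loop() {}
```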
There were many problems with this, conceptually. While trying to use the officially-supported Arduino MIDI library, I ran into issues such as the enormous mismatch in baud rate between a standard MIDI connection (31250) and most Arduino sketches (9600), which would probably mean really asynchronous reception, and the lack of a parity bit (which I had just recently been learning about in Advanced Circuit Design when discussing the creation of serial protocols), which meant that when I received data, it would just come back as a garbled mess of truncated messages and line breaks.
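For what it's worth, one common way around the rate mismatch on a Mega - which is not what we ended up doing, and which assumes MIDI wired into the Serial1 pins through proper input circuitry - is to give the MIDI stream its own hardware UART at 31250 baud and leave the USB serial monitor at 9600:

```
// Hedged sketch: a second hardware UART matched to MIDI's baud rate.
// Assumes a MIDI signal arriving on the Mega's Serial1 pins (hypothetical wiring).
void setup() {
  Serial.begin(9600);     // USB serial monitor at the usual sketch rate
  Serial1.begin(31250);   // hardware UART matched to MIDI's 31250 baud
}

void loop() {
  if (Serial1.available()) {
    byte b = Serial1.read();
    Serial.print("0x");
    Serial.println(b, HEX);   // echo each incoming MIDI byte as hex
  }
}
```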
I had even had some promising success from the other side of the signal chain by using USB/software MIDI-to-serial bridges such as the Hairless MIDI to Serial Bridge and even Serial to MIDI Converter, which is in fact just an application wrapped around Processing's serial library, and so requires Java to run. (My program director, Ajay Kapur, probably would have insisted that we create something like this already, considering his preference for visualizing the reception of serial data using bar graphs in Processing.) And yet I just could not get coherent bytes to flow into my serial monitor. Just a few hours before the concert, I decided to scrap all my work, delete all this gross middleware, and start fresh.
ChucK has - only as of the most recent version, released to be ready for the new book on audio programming written by Ajay - added new, slightly unstable, mostly undocumented serial I/O libraries. Ajay has already begun having a new crop of Interface Design for Music and Media students demo their ideas for instrument development exclusively in ChucK, rather than going through intermediaries such as Max. So, while the new crop of Interface Design students may produce a wealth of examples, this facility is otherwise undescribed in the current literature.
Yet I knew ChucK would be an appropriate utility knife for this job, since I knew it could receive and programmatically parse incoming MIDI events, and then ideally construct new serial events based on that reception - hypothetically with much less runtime overhead than another DAW or piece of middleware, thanks to its "pseudo-interrupt"-style, event-based execution.
However, my experience with constructing and transmitting simple data types over very simple protocols for efficiency's sake - bit masking and shifting operations and such - was brand new, so I really wasn't confident I could develop my own specification, with parity between Arduino and ChucK, in time to understand what I was doing. The current (slightly hidden) example code for ChucK serial does in fact come with one example Arduino sketch, but what it does (report "bar" whenever ChucK sends it "hi!") is not very useful.
I knew how to receive and create serial in Arduino, but not how to do any kind of special parsing such as the .readBytesUntil() function used in the example code, and this - combined with my lack of knowledge about transmitting a specific ASCII char byte ("A", or "B", or "F", etc.) - led to the hack solution I developed.
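For illustration, here's roughly what that char-based approach might have looked like on the Arduino side, had I been comfortable with it - the pin numbers, note letters, and pulse() helper are placeholders, not code from the actual project:

```
// Hypothetical version of the char-per-note approach I didn't end up using:
// read one newline-terminated message from ChucK and branch on its first character.
const int PIN_C = 22;   // placeholder solenoid pins
const int PIN_D = 23;

void setup() {
  Serial.begin(9600);
  pinMode(PIN_C, OUTPUT);
  pinMode(PIN_D, OUTPUT);
}

void loop() {
  char buf[16];
  // readBytesUntil() fills buf up to the newline (or until it times out)
  // and returns how many bytes it actually read.
  int n = Serial.readBytesUntil('\n', buf, sizeof(buf) - 1);
  if (n > 0) {
    if (buf[0] == 'C') pulse(PIN_C);
    else if (buf[0] == 'D') pulse(PIN_D);
  }
}

void pulse(int pin) {
  digitalWrite(pin, HIGH);
  delay(5);               // placeholder strike length
  digitalWrite(pin, LOW);
}
```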
Rather than just checking that a serial message's size was greater than 0 and flipping on a "switch," I had the Arduino report an integer giving the size of the message received up to the newline. A single character like "C" or "G" would both report a size of 1, so instead I mapped each scale degree to a repeated string - C / DD / EEE / FFFF / GGGGG, and so on. That way I'd see the note itself in ChucK and the scale degree on the Arduino, with a distinct size report for each position.
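Here's a hedged sketch of that length-counting hack, assuming ChucK sends "C\n", "DD\n", "EEE\n", and so on; the pin assignments are made up for the example:

```
// Hedged sketch of the length-counting scheme: the Arduino only cares how many
// bytes arrived before the newline, and maps that count to a scale degree.
const int DEGREE_PINS[] = {22, 23, 24, 25, 26, 27, 28};  // placeholder pins, degrees 1..7
const int NUM_DEGREES = 7;

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_DEGREES; i++) pinMode(DEGREE_PINS[i], OUTPUT);
}

void loop() {
  char buf[16];
  int n = Serial.readBytesUntil('\n', buf, sizeof(buf));  // n = message length
  if (n >= 1 && n <= NUM_DEGREES) {
    int pin = DEGREE_PINS[n - 1];   // 1 byte -> degree 1, 2 bytes -> degree 2, ...
    digitalWrite(pin, HIGH);
    delay(5);                       // placeholder strike length
    digitalWrite(pin, LOW);
  }
}
```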
The result was thrown together in the truest essence of the idea - containing an inefficient case-checking algorithm with holes (no filtering for note-ons only; there just are no note-offs!) and the entire default example code for both MIDI input and serial output - but it worked, and it was really exciting.
If you'd like to check out the codebase (complete with one of my signature ridiculous README files), check the GitHub repository here.