We just got a brand spankin' new version of Airflow going this past Friday! A nicer video documenting the changes will come soon, but for now, here's some raw phone footage of Ben riding the new system, which has been rather sloppily added to the Airflow project page:
A re-write of the Airflow page which re-frames the bulk of the discussion around the new system is probably forthcoming.
- I've added lots of new pages to my Projects hub as many things are coming into the public.
Well, this is certainly one of the biggest things I've ever worked on. Maybe the biggest?
In November of 2014, Mindride was contracted to create a live X-Ray machine. What happened over the next several months took us down a really fun rabbit hole of motion capture that ended up with us having some sweet wireless mocap suits.
A straggler Touchpoint video has surfaced from April 2014...
Sam Botstein had me on his show a few days ago, where I talked about being a music technologist working outside of music technology, why OSC and musicians got off on the wrong foot and how to fix it, the state of performing popular electronic music, Touchpoint (of course), and probably about 70 other things.
It's about 70 minutes long. It's a good opportunity to see how I really talk, in the absence of being able to compose my ideas. It also shows you how terrible I am at actually answering questions.
Nevertheless, it was a lot of fun. Be sure to check out other episodes of the show as well - Sam's interview with Tom Erbe is great.
Audiovisual display courtesy of Gabriel Rey-Goodlatte.
My solo recital (one of a set of three, sampling my voice from a mic input) is now available in 720p on YouTube. Naturally, this video will appear in a proper write-up on Touchpoint when the time comes, but for the time being, here it is outside of that.
Here is also a SoundCloud playlist of clips that were rendered during an earlier stage of development, around August 2013 in Berlin at Native Instruments headquarters:
On March 6th 2013, the CalArts Music Technology MFAs collaborated on a new live film score to the 1992 film Baraka as our second semesterly concert. Since this project was not my own, I hadn't posted it to my Projects hub.
Upon referencing it in the previous article about the Global Net Orchestra, I realized I didn't have a good audio stream of one of my pieces from it hosted anywhere. So now, this is a consolidation of the content I generated from that show.
"Untitled (Plinky)" had a couple of idea nuclei behind it:
- wanting to do a piece that made the pressing and releasing of the sustain pedal on an old, crappy-sounding piano sound HUGE.
- wanting to do a piece on an old, crappy piano with the sustain pedal held down that just descends in 7ths (C-D-E-F-G-A-B, going down an octave each time, for example). Obviously these ideas could live well together.
- convolving two unintelligible strains of speech synthesis together.
- big, big, big time The Girl with the Dragon Tattoo Soundtrack by Trent Reznor and Atticus Ross influence.
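For concreteness, the descending-7ths gesture in the second bullet can be sketched in a few lines (a Python illustration of my own, not anything from the piece): playing C-D-E-F-G-A-B while dropping an octave at each step makes every melodic interval a falling 7th (minor, except E to F, which is major).

```python
# Sketch of the descending-7ths idea: C-D-E-F-G-A-B, each note an
# octave below the last, expressed as MIDI note numbers.

C_MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitones of C-D-E-F-G-A-B above C

def descending_sevenths(start_c=96):
    """Return MIDI note numbers for the gesture, starting from the given C."""
    return [start_c + step - 12 * octave
            for octave, step in enumerate(C_MAJOR_STEPS)]

print(descending_sevenths())  # [96, 86, 76, 65, 55, 45, 35]
```

Every adjacent pair is 10 semitones apart (a minor 7th) except E down to F, which is 11 (a major 7th).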
My piece with Raphael Arar was interesting because it was totally "instrumental". He improvised a plodding, dark march feel on piano, for which I controlled a reverb return. I performed live effects processing (via Sugar Bytes Turnado) of samples of my niece Brooklynn struggling to say the alphabet, which had been extremely time-stretched in SPEAR. I was locked to a timeline and Raphael wasn't.
We arrived at something that consistently took breaths in the same place from rehearsal to rehearsal but was somewhat improvisatory outside of that. It was cool.
In early February 2014, my professor Ajay Kapur brought to my attention the solicitation of performers for a 100-person "laptop orchestra"-style network music performance led by Roger Dannenberg, named the Global Net Orchestra. Excited at the opportunity to play in a network music piece and to work with Dr. Dannenberg (who is the original author of the open-source software Audacity that I use for my raw data experiments, and the developer of Synthetic Performer, among many other things!), I keenly volunteered.
Roger provided very detailed instructions for how we were to submit a long, prepared wav file that contained each note we would be playing with silence gaps in between, so that each member of the orchestra had their own voice. I decided to re-use a nice, buzzy Kontakt patch I had created many years ago at Berklee, which previously appeared as the ominous drone of this video. (The sound was recorded by another Berklee student some years before that, it's an electric shaver moving around a face that has been chromatically tuned.)
We had five rehearsals, of which I was able to attend the required minimum of two, since, for instance, some were held at 3AM to accommodate performers in East Asia! The experience of participation was part PLOrk, part Guitar Hero, part graphical score, and part drum circle. Representing Carnegie Mellon University, the project culminated in a final concert on March 1st, which took place as part of the Ammerman Center for Arts & Technology 2014 Symposium at Connecticut College.
Among notable participants we could count Ge Wang and Pauline Oliveros, with Janelle Burdell serving as the ensemble's drummer, keeping time on a Zendrum. We played some Bach, we played a piece by Roger, we made some stuff up. Overall, there were 64 performers in the final concert, with hopes of doing it again sometime in the near future.
I didn't bother to figure out Squarespace's Embed widget until now. That's better ;)
Most of my January was occupied by preparing the three lectures for my class "Creative DSP: Composing with SoundBytes" for Interim 2014 at CalArts. I prepared a lot of small Reaktor ensembles and Ableton Live sessions to demonstrate a lot of little didactic examples in class. I even resurrected a few ancient Logic Pro 9 sessions dating back to 2009 and 2010 to examine completed compositions from a few times I've done this in the past, such as with "Come Back to Eggwall," as well as newer attempts, like "Angercore."
We also focused on topics like using "black box" standalone software and the advanced DSP-based techniques of established computer musicians, like Paul Lansky and Curtis Roads. We discussed FFT and convolution and the ways in which they can be used compositionally, alongside many pieces I've written in the past that showcase all of these ideas, such as "I Started Seeing Things" (letting a "black box" give you your main compositional ideas), "Noiseless (movement 1)" (sound file convolution as composition; also "movement 2", which is FFT/phase vocoder-based), "Ghost House Ladies" (cross synthesis, non-destructive transposition, making disparate elements live together), and "I'm Scared Stiff" (listening to the source material closely and pulling sprechstimme out of it).
So, as a best-foot-forward, I made a track using this clip, going over the session to view all the application contexts when needed, as with drum synthesis, for example.
Every year, the Music Technology program at CalArts puts out a compilation of some of the best tracks produced by students, whether from an assignment or just a personal project. They are selected by a curating member of the major and assembled for free on Bandcamp, alongside previous years' compilations, and other documents of live shows, like those from the ChucK concerts.
The compilation for the 2012 - 2013 school year arrived a little behind schedule, but it was quietly released on October 28th. It features two tracks by me.
- Raphael Arar - "Notes from the Apartment" (5:45)
- Lewis Godowski - "Durin's Bane" (4:17)
- JAEGER - "sun i want" (4:42)
- Nick Suda - "Morse" (6:34)
- Mark Morris - "Arctan" (4:18)
- Devin Ronneberg - "Hyfy" (4:44)
- Andrew Flores & Ashley Jacobson - "Senseless" (6:12)
- Rutaraj Wankhede - "Untitled" (4:52)
- Nick Suda & Andrew Flores - "Sneezestep" (4:59)
- Christopher Knollmeyer - "Ewan" (1:09)
- Jingyin He - "Stateless" (2:48)
- Youngmin Joo - "December Frost" (3:57)
- David Howe - "Cochlea Envy" (5:57)
- Bruce Lott - "Club 33 (Mutek Mix)" (4:23)
My first piece, "Morse," was a composition for Ajay Kapur's Composition for Robotic Instruments class. The bulk of our semester was spent repairing Ajay's famous robots - such as MahaDeviBot, GanaPatiBot, and BreakBot (and also the RattleTron and Dimitri Diakopoulos' Glockenbot) - from the state they were left in following the move back into the Machine Lab after being used in Samsara. We also installed a new permanent robot, the Clapper, a series of solenoid strikers attached to blue LEDs that tap along a grid on the ceiling of the Machine Lab. Then, we very quickly assembled our compositions, which were presented at a concert event called Meet the Bots in late 2012 in the Machine Lab.
Each piece interpreted the idea of "composing" for the robots - which were radially distributed all around the room - very differently. Eric Singleton augmented a traditional Ableton Live-based through composition with reinforcement percussion from the robots. Jon He used a Monome playing John Conway's "Game of Life" to procedurally trigger accelerometer-augmented rhythms amongst the robots. Kameron Christopher leveraged neural network techniques with brainwave-reading sensors to play a piece with his mind. I just wrote a through composition using only the robots, which was a multi-section sort of thing that I've never quite written similarly to before.
I made a point of primarily basing the composition around Trimpin's permanent installation inside the Machine Lab - the JackBox. Many people avoid working with the JackBox when exposed to the robots because it's difficult to work with: its MIDI mapping is very complex across many different kinds of instrumentation (toy tennis rackets, shot glasses, single-stringed fretless guitar and bass necks), it operates in a mixed amplified and acoustic setting, and it responds quite inconsistently to the MIDI commands issued to it, in terms of velocity response, the pickups feeding back, and such. Consequently, playback of my piece was pseudo-performative on my behalf, in the sense that I had to mix the electric stringed parts on the Machine Lab's mixing console as the piece was happening, even pulling back from a moment of feedback.
My second piece, "Sneezestep," was, I think, the third piece of music I ever composed and performed at CalArts, after the two from Decoding Dreams. It was a first attempt at wrenching myself away from through composition - at breaking the singer-songwriterly ways of working in ACID from the days of old and trying to figure out how to be performative. My partner Andrew Flores and I found common ground in old, super-distorted, funky, aggressive tracks by LFO from the early '90s, especially "Tied Up," and this became our main reference point for the ensuing track.
It was also the feature piece for some of my earliest compositional experiments with TouchOSC and later Lemur - and consequently the template which would eventually grow into the iPad-based instrument I'm creating now. There was also lots of use of the Arpeggiator to constantly re-trigger new sequences of two-bar phrases by mapping them to keys and holding them all down - one of my favorite Ableton Live tricks. It was a very rigid structure and in many ways a difficult project to realize, but that bubbling, industrial bassline I added at the last moment really gave the track what it needed.
I collaborated with fellow Music Technology MFA Jason Jahnke on my piece AutoLyrics for the Automaton concert I directed. While originally we planned a multi-faceted combination of various audio and visual projects Jason was working on, time eventually condensed our work to getting the first operational code onto a new instrument, the AquaHarp II, which he had literally finished fabricating and assembling just hours earlier.
The original AquaHarp was a glass-striking instrument that ran code behind the scenes which essentially struck each of its glasses at random in a very long, slow loop. The impact was a sharp tap on the center of the glass, and the glasses used had a rather short decay period. AquaHarp was shown at the same Digital Arts Expo at CalArts that Noise Floor and TD Skillz came from.
AquaHarp II improves on the earlier iteration in a number of important ways. First, the striking mechanism's narrow, sharp piston has been replaced by an acrylic hammer shape. The delay period between electrically telling the solenoids to start and stop striking acts somewhat like a MIDI velocity message, as well. Each glass position has a specially-sized mount for stabilization and security. The overall frame is lightweight and sturdy.
I had conceived of several ways to extend the algorithmic composition techniques of AutoLyrics to work with the AquaHarp II, and ultimately they worked perfectly and proved to be just as effective as I would imagine they would be... once we got the AquaHarp II responding to MIDI, which had never actually been done with the AquaHarp.
This became my eleventh-hour project for Automaton; I needed to find a way to get MIDI I/O into a standard Arduino Mega without breadboarding out the pins of a 5-pin (MIDI) DIN jack or using firmware flashes such as HIDuino, from my friend and fellow CalArts MTIID MFA alum Dimitri Diakopoulos, which would have had a steeper learning curve than I could afford at the moment.
Based on my passing acquaintance with the actual MIDI spec, I was thinking about how standard MIDI messages are usually just three bytes anyway (a status byte carrying the message type and channel, then two data bytes) - I dunno if it's serial or parallel or whatever over a MIDI cable or USB or wireless MIDI, but it's just three bytes - so I should be able to get this into the serial monitor of a running Arduino without problems.
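As a quick illustration of that three-byte shape (a Python sketch of my own, not anything from the actual project): a channel-voice message is one status byte, whose high bit is set and which carries the message type and channel, followed by two data bytes.

```python
# Split a 3-byte channel-voice MIDI message into its fields.

def parse_midi_message(b0, b1, b2):
    """Return (message kind, channel, data1, data2) for a 3-byte message."""
    assert b0 & 0x80, "status byte must have its high bit set"
    kind = b0 >> 4        # e.g. 0x9 = note-on, 0x8 = note-off
    channel = b0 & 0x0F   # channels 0-15
    return kind, channel, b1, b2

# Note-on, channel 0, middle C (60), velocity 100:
print(parse_midi_message(0x90, 60, 100))  # (9, 0, 60, 100)
```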
There were many problems with this, conceptually. While trying to use the officially-supported Arduino MIDI library, I realized that the enormous mismatch in baud rate between a standard MIDI connection (31250) and most Arduino sketches (9600) would probably mean really asynchronous reception, and in fact things like the lack of a parity bit (which I had just recently been learning about in Advanced Circuit Design when discussing the creation of serial protocols) meant that when I received data, it would just come back as a garbled mess of truncated messages and line breaks.
I had even had some promising success from the other side of the signal chain by using USB/software MIDI-to-serial protocols such as the Hairless MIDI to Serial Bridge and even Serial to MIDI Converter, which is in fact just a wrapped application around Processing's serial library, and so it requires Java to run. (My program director, Ajay Kapur, probably would have insisted that we create something like this already, considering his preference for visualizing the reception of serial data using bar graphs in Processing.) And yet I just could not get coherent bytes to flow into my serial monitor. Just a few hours before the concert, I decided to scrap all my work, delete all this gross middleware and start fresh.
ChucK has - only with the most recent version, released to be ready for the new book on audio programming written by Ajay - added new, slightly unstable, mostly undocumented serial I/O libraries. Ajay has already begun teaching a new crop of Interface Design for Music and Media students to demo their ideas for instrument development exclusively in ChucK, rather than going through intermediaries such as Max. So, while the new crop of Interface Design students may produce a wealth of examples, this facility is otherwise undescribed in the present literature.
Yet, I knew ChucK would be an appropriate utility knife for this job, since I knew it could receive and programmatically parse incoming MIDI events, and then ideally construct new serial events based on that reception - hypothetically, with much less runtime overhead than another DAW or piece of middleware, thanks to its "pseudo-interrupt style", event-based execution.
However, my experience with constructing and transmitting simple data types efficiently over very simple protocols - bit masking and shifting operations and such - was brand new, so I really wasn't confident I could develop my own specification, with parity between Arduino and ChucK, in time to understand what I was doing. The current (slightly hidden) example code for ChucK serial does come with one example Arduino sketch, as a matter of fact, but what it does (report "bar" whenever ChucK sends it a "hi!") is not very useful.
I knew how to receive and create serial messages in Arduino, but not how to do any kind of special parsing, such as the .readBytesUntil() function used in the example code; this, combined with my lack of knowledge about transmitting a specific ASCII char byte ("A", or "B", or "F", etc.), fed into the hack solution I developed.
Rather than just checking that a serial message's size was greater than 0 and flipping on a "switch," I filled each message with a number of characters equal to the value being reported, up until reception of a newline. Single letters like "C" and "G" would both report a size of only 1, but that doesn't mean I can't have a dual correlation in the mapping: C/DD/EEE/FFFF/GGGGG, etc. That way I'd see the note itself in ChucK and the scale degree from the Arduino, and get a distinct size report from each position.
The result was thrown together in the truest essence of the idea - containing an inefficient case-checking algorithm with holes (no discrimination for note-ons only, just no note-offs!) and the entire default example code for both MIDI input and serial output - but it worked, and it was really exciting.
If you'd like to check out the codebase (complete with one of my signature ridiculous README files), check the GitHub repository here.
Last week, my girlfriend Amber and I headed to All Electronics. I didn't realize she'd never been before; she was freaking out. We managed to suppress most (but not all) of our impulse buys.
What we absolutely could not resist, though, was this joystick. It was a part without a project: we needed to use it in something... it had the same sound as using a Street Fighter machine. (As a side note, it had never occurred to me that those joysticks were just four limit switches whose combinations gave you eight different control messages. Tight.)
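The "four limit switches, eight directions" observation can be sketched directly (a Python illustration of my own): up, down, right, and left are four digital inputs, and the two-at-a-time combinations give the diagonals.

```python
# Four limit switches (up, down, right, left) -> eight directions.
# Adjacent pairs of switches closing together produce the diagonals.

DIRECTIONS = {
    (1, 0, 0, 0): "up",    (1, 0, 1, 0): "up-right",
    (0, 0, 1, 0): "right", (0, 1, 1, 0): "down-right",
    (0, 1, 0, 0): "down",  (0, 1, 0, 1): "down-left",
    (0, 0, 0, 1): "left",  (1, 0, 0, 1): "up-left",
}

def read_stick(up, down, right, left):
    """Map the four switch states to one of eight directions (or rest)."""
    return DIRECTIONS.get((up, down, right, left), "center")

print(read_stick(1, 0, 1, 0))  # up-right
```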
They were also having a massive sale on really robust tour cases. And they had the big "American-style" flashy arcade buttons (like you'd see on a MIDI Fighter) out on display. That was it: "Let's try to repackage our games from last year's expo as a self-contained unit, with an embedded LCD monitor, arcade controls and 5V cell-phone power."
That was my excuse for buying two Raspberry Pi Model Bs.
I had heard warnings from some of my friends that this project would probably not end with success: apparently, Processing (which hosts most of the engine for our video games) runs terribly on the Pi - ostensibly from getting choked through a JVM.
I've flirted with the idea of triple-booting my MacBook Pro with Ubuntu before in a flight of power-user nerdiness, but I've never actually tried it. I knew that OS X is essentially a hot-rodded UNIX system, and that any Apple customer who's comfortable talking to Darwin in Terminal probably shouldn't be afraid of any distro. And so, when I got the Pis two nights ago, I delved into Raspbian and figured out how to install Processing and ChucK on it.
I say "port" when I talk about the mouse, keyboard, OS X controller, Windows controller, Raspbian controller and Rail Bow versions of TD Skillz, but for the most part I just mean "HID object remappings."
ChucK has a great library object in it for HID devices. Like "hi" in Max, any joystick, racing wheel, or other weird peripheral can be identified as just a series of channels and values (usually 0 - 1). The strange thing is that, for the same USB controller, the same button presses (the "left" button, for instance) come up under totally different paths from operating system to operating system. So, each platform needs a new diagnostic test of the paths so that the source code downstream can be adjusted.
Raspbian's implementation was especially weird because it didn't even treat the D-pad as button presses; it treated it as axial hat movement, as though it were two analog axes with three possible values (-1, 0, and 1). So, this took a bit more of a rewrite in Processing than I expected, but it was still only a few minutes' work.
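A minimal sketch of that remap (in Python rather than the Processing of the actual port, and the axis sign convention here is an assumption, since drivers differ): the two hat axes, each taking -1, 0, or 1, are translated back into the "pressed button" names the game logic expects.

```python
# Translate Raspbian-style hat-axis values back into D-pad button names.

def hat_to_buttons(x, y):
    """Return the set of 'pressed' D-pad buttons for hat axes x, y."""
    pressed = set()
    if x == -1: pressed.add("left")
    if x == 1:  pressed.add("right")
    if y == -1: pressed.add("up")    # assumed sign convention; drivers vary
    if y == 1:  pressed.add("down")
    return pressed

print(sorted(hat_to_buttons(1, -1)))  # ['right', 'up']
```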
Although it required spending several hours quite intimately with Linux's amazing apt-get along the way (in getting ALSA/libsndfile/Bison/Flex/etc. etc.), the port worked "perfectly." The only problem is, it was awful. It was just as my friends said it was: unusably slow. Like, below one frame-per-second slow.
Oh well. I have two awesome cheap computers to run emulators and play with digital GPIO on in nice cases, now, anyway. I added the functional (if unusable) Raspbian port of TD Skillz to the GitHub repository, and the project page has likewise been updated.
Interested in getting your hands on a Raspberry Pi of your own? Visit element14 and pick one up.
Shortly before I returned to CalArts from Berlin, I fell into a trap of fancy that I hadn't visited since 2010: raw data experiments - the act of taking the raw binary of non-audio file formats (be they compressed, pre-compiled, or just plain text files) and encoding it as digital audio of varying spatialization, bit depth, sample rate, dithering technique, etc. using Audacity. This is a great and seriously unsung feature of Audacity, and I love using it.
The big difference between last time's go and this one was that these tracks were a bit more composed. The previous three ("FingerDIFFPARAM," "Aerobic," and "M1A1 Shapes") were purely exploratory; each is one complex computer file that, when encoded as digital audio, resulted in a song-length track of beautifully organized, circuit-bent-sounding noise music.
These new tracks, by contrast, are edited together and otherwise processed from several files. In "Rawgust," 4-5 medium-length sound files were nested together to make one larger form whose repetitions appeared to be in song form. In some cases, I couched other files inside of longer files that had a perceptually silent gap in them on the order of a couple of minutes.
In "Save0," the fragments were much more granular. Files between a tenth of a second and 10-15 seconds are edited together in rapid succession to make larger phrases.
In "Showa," I combined this favorite technique of mine with another: abusing de-noising algorithms. I found two different renders of a file I liked quite a bit, but which had too much overall random, broadband noise, and fed them through an experimental build of iZotope RX 3 in which the noise profile the plugin is fed is pretty much the entire signal itself, with the reduction settings turned to the most extreme available. The result sounds like a spectral-music hot shower, which is why I called it "Showa," after the Japanese loanword from English for "shower."
I find that files with a lot of nested redundancy, but overall a lot of variation on the macro level - as well as big chunks of pre-compiled data put together with uncompiled relationships - work wonderfully for this kind of process. Reaktor .ens files, Photoshop .psd files, and any big chunk of data from a game engine like Unreal Engine or Unity are particular favorites of mine.
In general, most files will just produce short bursts of noise. Text files, images, simple file formats like that will give you just a few milliseconds of static. But, based on some experimental principles worked out by a friend of mine, you can do things like make a high-frequency sawtooth wave by pasting "abcdefghijklmnopqrstuvwxyz" 44,100 times into a plaintext file; or even just a constant DC offset by writing millions of "j"s into a file. There is still much exploration left to do with this form of content mining.
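Those two tricks are easy to reproduce (a Python sketch of my own, under the assumption of an Audacity-style raw import as unsigned 8-bit mono at 44100 Hz): 26 bytes of alphabet per cycle read back as a rising ramp, i.e. a sawtooth-like tone around 1696 Hz, while a run of identical bytes reads back as one constant sample value, i.e. pure DC offset.

```python
# Assumed raw-import settings: unsigned 8-bit mono, 44100 Hz.
SAMPLE_RATE = 44100

# The alphabet repeated 44,100 times: each 26-byte run decodes as one
# rising ramp, so the whole file is a ~26-second sawtooth-like tone.
saw = b"abcdefghijklmnopqrstuvwxyz" * 44100

# A million identical bytes decode as a flat line: pure DC offset.
dc = b"j" * 1_000_000

period = 26                          # bytes (samples) per ramp cycle
print(SAMPLE_RATE / period)          # fundamental: ~1696.2 Hz
print(len(set(dc)))                  # 1 distinct sample value
```

Writing either byte string to disk and importing it as raw audio at those settings would reproduce the effect described above.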
Do be advised that this process is not optimized to live in musically acceptable frequency ranges by default - some files will indiscriminately throw a near-full-scale 18 kHz tone at you, for instance.
I'm not sure what to do with these pieces of music, but I love them. Maybe they will live on as sample fodder for my future songwriting efforts or for others to sample, maybe they will be left as-is... Hell, I'd just as easily press them on vinyl a la Smojphace.
Welcome to the new version of nicksuda.com, enabled gracefully by Squarespace. While I had little issue with BlueHost as a server, and I still highly recommend them if you just want to hold a site without any very large files as a hosting platform, I had to admit that I'm no website designer. And thus, this new site where content can come first instead of awful, thoughtless formatting.
Hope you enjoy your stay!