Shortly before I returned to CalArts from Berlin, I fell into a trap of fancy that I hadn't visited since 2010: raw data experiments - the act of taking the raw binary of non-audio file formats - be they compressed, pre-compiled, or just plain text files - and encoding it as digital audio of varying spatialization, bit depth, sample rate, dithering technique, etc. using Audacity. This is a great and seriously unsung feature of Audacity, and I love using it.
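Audacity's File > Import > Raw Data does the heavy lifting here, but the core of the trick fits in a few lines of Python using only the standard library. This is a minimal, hypothetical sketch (the function name and parameters are mine, not from the workflow above): it interprets any file's bytes as unsigned 8-bit mono PCM and wraps them in a WAV header so any player can open them.

```python
import wave

def raw_to_wav(src_path, dst_path, sample_rate=44100):
    """Interpret a file's raw bytes as unsigned 8-bit mono PCM and
    wrap them in a WAV container - a rough stand-in for Audacity's
    File > Import > Raw Data feature. Illustrative sketch only."""
    with open(src_path, "rb") as f:
        data = f.read()
    with wave.open(dst_path, "wb") as w:
        w.setnchannels(1)           # mono
        w.setsampwidth(1)           # 1 byte/sample; 8-bit WAV is unsigned
        w.setframerate(sample_rate)
        w.writeframes(data)         # every byte becomes one sample
```

Changing the sample width, channel count, or rate is where the "varying spatialization" part comes in - the same bytes read as stereo 16-bit at 22.05 kHz make an entirely different piece.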
The big difference between last time's batch and this one is that these tracks were a bit more composed. The previous three ("FingerDIFFPARAM," "Aerobic," and "M1A1 Shapes") were purely exploratory; each is a single complex computer file that, when encoded as digital audio, resulted in a song-length track of beautifully organized, circuit bent-sounding noise music.
These new tracks, by contrast, are edited together and otherwise processed from several files. In "Rawgust," four or five medium-length sound files were nested together to make one larger form whose repetitions suggested song form. In some cases, I couched other files inside longer files that contained a perceptually silent gap on the order of a couple of minutes.
In "Save0," the fragments were much more granular: files between a tenth of a second and 10-15 seconds long were edited together in rapid succession to make larger phrases.
In "Showa," I combined this favorite technique of mine with another: abusing de-noising algorithms. I had two different renders of a file I liked quite a bit, but both had too much random, broadband noise overall. So I fed them through an experimental build of iZotope RX 3, gave the plugin a noise profile that was essentially the entire signal itself, and turned the reduction settings to the most extreme values available. The result sounds like a spectral music hot shower, which is why I called it "Showa," after the Japanese loanword from English for "shower."
I find that files with a lot of nested redundancy, but overall a lot of variation on the macro level - as well as big chunks of pre-compiled data put together with uncompiled relationships - work wonderfully for this kind of process. Reaktor .ens files, Photoshop .psd files, and any big chunk of data from a game engine like Unreal Engine or Unity are particular favorites of mine.
In general, most files will just produce short bursts of noise: text files, images, and other simple formats give you only a few milliseconds of static. But, based on some experimental principles worked out by a friend of mine, you can do things like make a high-frequency sawtooth wave by pasting "abcdefghijklmnopqrstuvwxyz" 44,100 times into a plaintext file, or produce a constant DC offset by writing millions of "j"s into a file. There is still much exploration left to do with this form of content mining.
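The alphabet trick checks out on paper, and generating the files is trivial (the filenames here are just for illustration). The letters a-z are the ascending byte values 97-122, so repeating them gives a 26-sample ramp; read as 8-bit PCM at 44.1 kHz, that ramp repeats about 1,696 times per second - a roughly 1.7 kHz sawtooth. A file of one repeated character decodes to a single constant sample value: pure DC offset.

```python
# A ~1.7 kHz sawtooth: a 26-byte ascending ramp (97..122) repeated
# 44,100 times. At 44.1 kHz that's 44100/26 ~= 1696 ramps per second.
alphabet = "abcdefghijklmnopqrstuvwxyz"
with open("saw.txt", "w") as f:
    f.write(alphabet * 44100)

# Constant DC offset: millions of identical bytes ("j" = 0x6A) decode
# to one fixed sample value - a flat line pinned off-center.
with open("dc.txt", "w") as f:
    f.write("j" * 2_000_000)
```

Import either file as unsigned 8-bit mono raw data at 44.1 kHz and you get exactly those waveforms.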
Do be advised that this process is not optimized to live in musically acceptable frequency ranges by default - some files will indiscriminately throw a near-full-scale 18 kHz tone at you, for instance.
I'm not sure what to do with these pieces of music, but I love them. Maybe they will live on as sample fodder for my future songwriting efforts or for others to sample, maybe they will be left as-is... Hell, I'd just as easily press them on vinyl à la Smojphace.