Production Retrospective: Cut the Sky’s Throat
“Where the magic happens”
I wanted to write a bit about the process that went into Cut the Sky’s Throat—the album “embedded” in the audio edition of Party At The World’s End.
In-world, the record is a diegetic artifact—an ARG-ish “found album” by a fictional band.
Out-of-world, it’s me trying to make something that can sit inside the story but still hold up if you hit play with zero context (which is basically how I approach all the Fallen Cycle “artifacts,” if I can help it).
Although the album is part of the larger audio production for Seasons 5–6 of the Fallen Cycle fiction podcast, I still wanted it to function as an album, not just a prop.
A quick note: I’ve written elsewhere about the specific ways I’m interested—and not interested—in machine learning / diffusion / the big bucket people keep calling “AI,” much to my general displeasure. The following is just a process journal: what I tried, what broke, what I kept, and some thoughts looking back on it.
Most tracks began as source stems or demos—recordings from a long time ago (Babalon, subQtaneous, etc.).
The first stage was basically taking that old material and iterating each track using a bunch of different generative tools off those source stems.
My initial impulse was to say “good enough” and dive into putting together the podcast itself, but I decided I wasn’t happy enough with that as a finished product, even after iteration.
It did produce a big old pile of options—alternate takes on the same underlying material.
I also flirted a bit with the more heroic option—reperforming / re-recording everything I could by hand and pulling in session musicians for the rest—but practically, financially, and creatively it didn’t feel like the right move for this particular artifact.
In other words, I realized that for an album embedded within a fiction podcast, that was probably fucking bonkers.
Back to tinkering. The real work in this phase was selection: sifting through those outputs and weeding them down to what I was looking for, then splicing and arranging what remained into a workable template.
In other words: the song structure, most of the harmonies/melodies, and some of the hook moments were inherited from earlier recordings—but musically, the album version was ultimately the result of curation, recombination, audio to MIDI, retooling the note content, and then MIDI to audio with virtual instruments / VSTs.
For lyrics, I leaned (again) into cut-up methods—Bowie/Burroughs style—pulling passages from the novel and recombining them until something coherent (and singable) surfaced. It’s a bit more wordy than most music of its type, but given the psychedelic rock opera vibe of the whole project, it felt right to keep it that way rather than paring it down further.
A few key lines from the original tracks did stay in place because they were part of the initial PATWE concept, or maybe because it amused me to keep a couple of those fossils intact.
Probably a bit of both.
Once I moved into actual production, I immediately hit the current limits of stem separation.
For straightforward play-alongs, separation can be usable even if it’s a little wonky; the point is to play your part live along with it.
But for mixing/production, it’s just not there yet: weird artifacts, warbling. Some stem splitters were better than others, but none were good enough out of the box to call it a day and get right into editing and mixing.
Predictably, wrestling with that constraint is where the process shifted: I started treating the separated audio as reference and source material, an “underpainting” to build within and gradually pull away. Incidentally, this has been my primary methodology for digital visual art for two decades now, which is probably why the approach occurred to me.
There it’s digital “brushes” (inks, color pencil, paint, etc.) on top of pencils, or photobashed source that’s gradually increased in transparency as you build up and revise on top of it.
Once I made that mental shift, the project became tractable. That said, some medium differences cropped up—for instance, phase issues became a problem with some of the layering approaches I played with. (There isn’t really an analog for that with digital painting.)
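To make the phase problem concrete: when a rebuilt layer sits on top of separated source that contains a version of the same signal, any phase offset between the two changes what you hear when they sum. A toy sketch in pure Python (illustrative only, not any tool I actually used):

```python
import math

def mix_layers(freq_hz, sample_rate, n_samples, phase_offset):
    """Sum a sine wave with a phase-shifted copy of itself and
    return the peak amplitude of the combined signal."""
    peak = 0.0
    for n in range(n_samples):
        t = n / sample_rate
        a = math.sin(2 * math.pi * freq_hz * t)
        b = math.sin(2 * math.pi * freq_hz * t + phase_offset)
        peak = max(peak, abs(a + b))
    return peak

# In phase (offset 0): the layers reinforce, peak approaches 2.0.
# Half a cycle out (offset pi): the layers cancel, peak approaches 0.0.
```

In a real mix the offsets are frequency-dependent and partial, so instead of silence you get comb-filtered hollowness, but the mechanism is the same.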
I worked against that constraint in various ways, mostly by banging around until something worked, and at a few other points, by saying “fuck it, good enough.”
In practice, this approach meant splitting the project into vocals and “music,” and treating the separated music as a reference bed rather than a mix foundation.
After sufficient iteration and splicing, vocals were mostly either usable outright or replaceable. Given the narrative concept for Lilith, the lead vocalist—a larger-than-life doppelganger/chameleon rock star—it seemed like something to exaggerate rather than shy away from. Delays, chorus, some truly unnatural reverbs: all of it accentuated that slightly synthetic vibe.
I also pushed the vocal identity to shift a bit track to track, almost like each song is a different facet of the same persona. Again, it matched the brief in this case.
Once the tracks started to take form, I did the usual mixing SOP: setting up groups/busses, sends, glue compression—the same infrastructure I’d use on any other record. (Then you go way too far with layering VSTs until your CPU cries out for you to just fucking kill it already, and you pull them all off the track and start over. Y’know. The usual.)
When there was usable note data, I used audio-to-MIDI to extract it, tweaked the parts as necessary, and recreated them with virtual instruments. (I avoided spending a lot more than I might have otherwise by using trial licenses for the VSTs, plugins, apps, etc. not already in my library—which is good, because I did a whole lot of experimenting and probably would have wasted money on options I wound up not using, or only using a little.)
Sometimes those replacements stayed faithful; sometimes they mutated into something that I thought was better. Sometimes it was a bit of both. If I liked how it sounded, it stayed. You can hear a few “saxitar” lines in “Hungry Star,” for instance. I pushed recreated / virtual instruments through the same busses and sends so the retained bed and the rebuilt parts shared processing and “space.”
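For what it’s worth, the heart of the audio-to-MIDI step is just mapping detected fundamentals onto note numbers. The extraction itself was done with off-the-shelf tools, but the core conversion boils down to this (a sketch; `hz_to_midi` and `midi_to_name` are illustrative names, not anyone’s API):

```python
import math

def hz_to_midi(freq_hz):
    """Map a detected fundamental frequency to the nearest MIDI note number.
    MIDI note 69 is A4 = 440 Hz; each semitone is a factor of 2**(1/12)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def midi_to_name(note):
    """Human-readable pitch name for a MIDI note number."""
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    octave = note // 12 - 1
    return f"{names[note % 12]}{octave}"

# hz_to_midi(440.0) → 69; midi_to_name(69) → "A4"
```

Everything else in that phase—quantizing, cleaning up octave errors, retooling the note content—happens after this mapping, in the MIDI editor.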
That workflow is adjacent to how a lot of modern drums get layered: use a base live mix for continuity, then reinforce or replace what needs filling in with samples.
Drums were really the easiest problem in this approach—lots of transients, easy to capture—so I could often double generated stems or cut and loop the parts that needed changing without the whole thing collapsing. There are a few exceptions but my usual approach has always been to treat the drums/percussion like the first-layer structural component, so this was familiar territory.
Bass was a mixed bag. I initially planned to retrack more myself, probably mostly out of habit, but ultimately did less that way. My ideal choice for this project would have been my Warwick 5-string Corvette, but… the wiring (yet again) needs replacing before it’s recordable. My ’70s Fender Jag J-bass worked in a few spots, but it didn’t have the right character for a number of the tracks—and truth be told, though it’s more serviceable for tracking at the moment, it has its own issues that need looking at.
(I’ve got to get them and my PRS to the shop sooner rather than later before I get into my next musical malfeasance, but that’s another story).
My ego aside, virtual instruments have gotten very good for a purely “studio” project like this, so if anything that helped keep my head very much in “producer mode.”
After the rebuild strategy clicked, and I got past various frustrations trying to piece parts together this way, it turned back into the familiar post-production routine: balance decisions, EQ/compression, spatial effects, and the slow work of trying to make the mix translate.
And, yeah. I’ve been trying to go a little less crazy with plugins in general, but this was not the project where that resolution entirely held. As Saint Augustine wrote, “Lord, give me chastity and continence... but not yet.”
I saved myself from total madness with a little heuristic—once a track hit well enough to get my head nodding along, it was time to put that track aside for mastering and move on. I know all too well how easily I could spend a year going into the weeds on something like this, especially when the process itself is entertaining.
This gets to one of my first reflections on process: adapting the process to the intention.
Ideally, all parameters should be project-specific. There’s not a right way, singular. There’s a way that produces more of X and less of Y, which in the right context is the better way.
(Which isn’t exactly as pithy, I admit.)
For example, with the Tomorrow’s Forgotten Relics EP, one of the parameters I set in my head going in was “no loops, all live instruments.”
As that was my “pandemic album,” maybe it was partially driven by watching all the live performances bands were doing from different locations (dealing with latency must have been a nightmare in some cases), but I wanted it to have that sort of liminal-space version of a live performance at that time—everyone cloistered in their little video chat window.
Harder to explain now that I try to do so, but it was very much the vibe.
Though we did bend the rules slightly with MIDI sequencing of some of the keyboards / synths (“they’re more guidelines than rules…”), for the most part I stuck to it, and the constraint shaped the material. I didn’t even punch in much.
There is a kind of collision of synchrony and desynchronization that runs through that EP, and some people have liked it and some people have been like, “all of it sounds weird, man.”
I don’t know what to say to that other than: it wasn’t a dead-on bullseye, but it was close to the mark I was shooting for. Not much else to say.
With Cut the Sky’s Throat, the challenge was coming up with a production approach that would match the concept and meet the hard constraints of “within six months” and “shoestring budget.”
Given that, I’m more happy than unhappy with how it turned out, which is typically the best I’ve found you can hope for.
There’s a Wizard of Oz quality to a production like Cut the Sky’s Throat—and leaning into that rather than running from it was part of the fun for me here. Creating the illusion of an entire band when it's just you behind a curtain (sucking on a THC vape pen and twisting knobs and dials) feels a bit like an indulgence, but when a project affords you that opportunity, you take it. When in Rome.
A core theme in the book is malleable identity, and the production method fit the theme instead of fighting it.
A few more thoughts about intentions and constraints.
If I’m working on a project where the thing is hand-drawn with ink, then that’s the thing.
But I don’t see that as a virtue in itself. I’ve never understood the mentality that a drum machine is less real than a V-Drum is less real than a kit is less real than a drum you hand-carved yourself.
I can see reasons why you might want to do any one of those things.
Another example comes to mind. For a lot of visual work I use a stylus and digital “ink,” which means limitless undo (Ctrl+Z), nearly limitless layers, clean revisions, and—as a lefty—not dragging my hand through wet ink.
The downside is the price of having more options—if you don’t know how having too many choices can be a problem, then clearly you’ve never worked on a painting or album or novel you obsessively reworked for years. (And still weren’t fully happy with.)
There is a whole essay (or book) hiding here about the inevitable tradeoffs within any technology, but suffice to say for the purposes of this post that every methodological choice has downstream effects. This isn’t really a moral claim. It’s an aesthetic one. Neither is universally better or worse.
Critiques that sound like “this steak contains too much meat” have never made much sense to me. Intention isn’t everything in the creative process, but it’s definitely something.
A few conclusions present themselves from this experiment.
First off, this isn’t how I’d want to do every album. It was a lot of fun for what it was—approaching it as a producer, and letting the artifact concept justify the pipeline—but, although I’m starting to hit some physical limits that weren’t such an issue when I was younger, for the time being at least I think I would get bored of using a fully synthetic approach on every audio project.
Honestly, this applies equally to all programming, whatever the method of input. As I do things, I’m not sure there’s much difference at the production level between this process and editing MIDI sequences produced by any number of other methods (from chord-change or rhythm-pattern packs to algorithms, etc.).
There was a time when I was still taken by the idea that it mattered how fast I could move my fingers. Like that was something to prove.
I’m not knocking the sense of mastery I got when I learned my first Dream Theater song. But virtuosity isn’t the only game in town, and it’s never really been mine if I’m being honest.
So it’s not that. The fact is I still enjoy playing parts with my hands a bit too much to stop.
Distribution also added an annoying constraint towards the end of the process. This is actually probably the largest flaw in the project, to me anyway.
To get 48 kHz masters to streaming via DistroKid, I had to use their third-party mastering option.
The results were a little “clearer” than my own masters, but at the expense of sharp high-frequency stuff that bugs me in certain sections—open hi-hats, heavy crash riding, certain guitar frequencies all make my teeth itch above a certain volume threshold. It’s not terrible, but it’s not ideal either.
Inside the podcast itself, I use a more balanced (and slightly quieter) master that matches the episode mix better anyway. And the podcast is free for anyone who wants it.
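If you want to compare two masters yourself—say, to audition them at matched loudness rather than letting the louder one win—simple peak/RMS measurements get you most of the way. A minimal sketch, assuming float samples normalized to ±1.0 (proper streaming loudness uses LUFS per ITU-R BS.1770, which weights frequencies differently; this is just the quick-and-dirty version):

```python
import math

def rms_dbfs(samples):
    """RMS level of a block of float samples (full scale = 1.0), in dBFS."""
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_sq) if mean_sq > 0 else float("-inf")

def peak_dbfs(samples):
    """Sample-peak level in dBFS (not true-peak; no intersample estimation)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

# A full-scale sine reads ~0 dBFS peak and ~-3 dBFS RMS.
```

Match the RMS (or better, LUFS) of the two masters before A/B-ing and the “clearer” one often stops sounding so obviously better.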
Know when to hold ‘em and when to fold ‘em, right?
Ultimately, the method did what I needed it to do. It served the conceptual brief, kept the artifact self-sustaining as music, and let me finish without turning it into a multi-year perfection spiral.
A secondary goal was simply to make an album I enjoyed making and would listen to from time to time.
On that count, it worked.
So while I hope it is as enjoyable for other people who encounter it down the road, for my part, if I’m doing anything right at all… I’m already on to the next thing.