The AI made me do it.
One Artist’s Statement On the Use of AI
Some thoughts. There’s so much that can and should be said on this topic, but I’d like to dwell for a moment on the practical dimension of engaging with technology that utilizes LLMs in one way or another, and what I see as the near-term risks, rather than getting lost in the weeds of our moral distinctions.
To begin, in the context of growing discussions about “AI,” it is critical to have clarity in terminology. Where it’s appropriate, I prefer the term Large Language Models (LLMs) over AI. This isn't only a matter of semantics, even though I'm aware that the battle over terminology may already be lost. I’ll sometimes grudgingly call it AI for the sake of clarity, because few seem to agree with me on this, but I hope it’s understood what I actually mean.
There’s a larger issue about the vagueness of the term “intelligence” that I dealt with at least in part in this Metapsychosis article, especially in terms of conjecture about AGI. We have a hard enough time coming to consensus on a single method for determining the “I,” so adding “AG” on top of it isn’t helping anything.
LLMs represent a specific subset of technology, whereas “AI” is broad, unclear, and often misused. This distinction matters because it keeps human agency in view whenever these technologies are utilized.
“Artificial Intelligence” can be used to shift agency. Arguments like “the AI made me do it” obscure the reality that these systems are tools, directed by human decisions. It’s like a next-level version of the selective passive voice found in news headlines.
The reported use of these systems to select air strike targets is the most obviously egregious type of ‘agency laundering,’ whereby humans select from the options provided by an LLM, but there is essentially no one to blame at the end of that chain. But there are many others, such as when executives or HR claim that AI drove them to make layoffs. It only “drove” them in the exact same sense that capitalism itself is a form of AI. (This is an idea explored by countless authors prior to the current AI craze, from Douglas Rushkoff and William Gibson to Vernor Vinge and Kevin Kelly.)
Technology makes a convenient scapegoat. The tendency of leaders to offload their responsibility onto an oracle or deity is hardly a new development in society, and it is one we would be particularly wary of if we had any sense.
Misdirecting blame towards AI for corporate decisions, such as layoffs or algorithm-based hiring, overlooks the humans making these choices. If you've wondered why so many corporations that stand to make a profit from AI seem to goad on discussions of “evil AI" and “AI run amok" as the chief risk here, this is almost certainly a part of that reverse psychology PR strategy.
We should not be surprised at the results of misuse, nor should we attribute those results to the agency of this technology, as agency is not a thing it has. It’s a bullshit machine, more or less in the sense of Frankfurt’s essay “On Bullshit.” Within the confines of its training, you can ask it to give you more or less of a given example, and it fills in the space.
“Persons who communicate bullshit are not interested in whether what they say is true or false, only in its suitability for their purpose.”
You might be surprised how useful bullshit can be, but it can also serve as a lie if a human being gets into the loop and chooses to use it as one.
Here again, there’s much that might be unpacked. Maybe I’ll get to it in the second season of the Narrative Machines podcast. More practically, I wanted to share briefly how I’ve been navigating this, how I currently look at my own use — as a freelancer, as an independent artist and writer, and in those lucky cases where they all intersect:
An iterative process rather than a single solution. A kind of multi-modal macro integrated as a component of a broader process, creative and otherwise. I either use it myself within contexts I already worked in prior to these tools, or oversee its use by others within those same contexts.
For example, I can’t imagine ever asking an LLM to “write a book for me.” I might ask it to help organize some rambling early notes on a subject into an unsorted list, to assist in de-essing an audio track or picking out trouble frequencies, to voice a story that I wrote, to give me ten variations on a sentence that’s been driving me crazy for an hour, and so on.
Always check its outputs, and seek to do something with those outputs afterwards that involves human eyes, ears, hands.
No particular generation is final.
Process should be tailored to suit the specific intent, medium, and end-goal of each project. Come to think of it, this is true whether or not LLMs are involved.
Scale. Available budget is a major factor in separating “good” use from “bad.” Morally, I look very differently at a billion-dollar corporation using LLMs to cut creative teams than at a band who have collectively pooled $1000 to make an album, although creatively I’m more inclined to ask “...but was what they did with it any good?”
Don’t undercut your own authority. This is a directable automation tool used within domains you’re already familiar with. If we fully automate our workflow without engagement, we should expect subpar results.
Which is to say that if you automate your entire “pipeline,” simply turn the machine on, and walk away, don’t be surprised if all that comes out the other end is shit.
Don't use LLMs to replace people. This should be an unambiguous distinction. Instead, help actual humans use them in ways that are effective and limited to the scope of both the task and the actual faculties and failings of LLMs.
For example, the tendency to confabulate can be an asset within contexts where being presented with novel options or divergence is useful or interesting. In contexts where fidelity is crucial, general-use LLMs are typically poorly suited. Even models trained for specific purposes, like detecting cancerous growths, need to be used by a human, not used to replace a human.
Similarly, I don't use LLMs to replace people I would have otherwise hired. To the extent that it can improve what small cash-strapped indie operations can pull off, I see no less reason to use it than an iPhone or Google Analytics account.
As a case in point, I've used text-to-speech LLMs as part of the production process for the Fallen Cycle Mythos podcast, translating my writing to audio format. This is material that I wrote myself, which I'm producing and editing myself, which I’m repeatedly regenerating until I am happy with each passage, and for which I don't have the budget to hire professional voice actors, much less the number of them it’d take to do the project properly. It isn’t perfect, but it gives me far better source material to work with in Audition and Ableton than the many attempts I’ve made over the years to do this myself with a microphone.
Doing it this way has also freed me up to kick some money to the other musicians contributing soundtrack work to the project, which, as it's coming out of my pocket for a project that doesn't bring in revenue, is a limited amount, but it is an inarguable boon so far as I'm concerned. Without LLMs, quite simply, the project wouldn't exist at all. This is the type of context I have used them in, and in which I intend to continue using them. If I had the budget to hire a bunch of pro voice actors without going into debt, that would be the play, whether or not LLMs were used in some other capacity. This isn’t a binary proposition.
Case-by-case basis. The context of use is an important factor in determining what sort of process is appropriate. Making ‘batman eatsa the spaghetti’ memes is a very different use case from doing an art commission or designing a book cover, and those depend on everything from the desires and budget of a client to the range of options in a particular style.
Results aren’t all the same. I don't support AI slop any more than I support non-AI slop. There are and will be many examples of this technology being used to produce substandard results. We should all endeavor to make sure that anything we put out is of equal or greater quality than anything we put out before AIs entered the picture. That’s the standard by which I evaluate every process I might involve LLMs in.
There are other ethical issues one might raise about the specific models or training data used in one case or another. Some of these are valid concerns, although they are in no way different from the scrutiny we might apply to these corporations in any other domain: Microsoft, Amazon, Apple, Meta, etc. The provenance of their training data is in many cases dubious at best — as is the case with corporate and government use of all of our data, or the various ways that resources are extracted, manufactured, sold, and binned. These are not separate problems.
In other words, more pressure can and should be applied to how corporations use this technology, but not only this technology. It’s quite possible to be critical of Apple’s corporate practices (for example) and still own an iPhone, though this is certainly part of what we’ve each got to wrestle with as we come to our own conclusions about where the line is.
Even if you don’t work with ChatGPT or MidJourney or Stable Diffusion, LLMs are a part of Illustrator, of Photoshop, of the chip inside your computer. Yes, the chip. The rise of LLMs hasn’t just used existing GPU and cloud infrastructure; it has reshaped the trajectory of hardware and data center design to accommodate their specific needs.
Looking forward, as more people recognize how ubiquitous AI has become in daily life, there will likely be a growing acknowledgment that the applications and users of this technology are diverse, and not uniformly effective or harmful.
Some uses are a boon, others a curse. So far as that goes, this is not unique to AI but applies to all technological advancements. A shovel can be used to dig a foundation, or a grave, or to put someone in one. Possibly all three.
In conclusion, let me just say that I'll pretty much always go to bat for people who want to be critical about AI misuse, especially the obvious risks when it’s leveraged by powerful corporations and individuals to their own ends. This is true of nearly all our industrial technologies. I also support those who simply don't want to use it, the same as I support my vegan friends even though I eat meat.
However, I think kneejerk fear and the categorical reviling of anyone who uses the technology in any way is not only misguided, but actually supports the kinds of narratives tech CEOs seem to want us to fall for: that this is some categorical sea-change rather than an evolution of technologies (machine learning, algorithms, etc.) which have been in use for many years, and which are, for better and most definitely worse, part of a much larger machine. We cannot extricate LLMs from capitalism, not because of the inherent nature of LLMs, but because of the societal context all of this is occurring within.
That may seem glib, because there's so much more to be said on that last point, but as the intention of this blog post is to speak only to the most practical side of the topic, I suppose I'll leave it there, with the hope that I’ll find the time and energy to get into those broader issues more thoroughly myself.