AI and moral injury

As generative AI and LLMs seem to take over the world, and as someone with significant ethical concerns about these technologies, I’ve been feeling quite despondent lately. A colleague with similar values accurately described it as “existential depression” - and then earlier this week I read this excellent article by Krisztina Csapo, which introduced me to the concept of moral injury:

the deep existential wound that arises from witnessing, participating in, or feeling complicit within systems that cause profound harm while betraying core values.

Yup. 100%.

Because of course, these emotions that I’m grappling with aren’t just about AI - so much of our world is struggling at the moment, from climate change to fascism to genocide, and it all feels intertwined. AI is just one of the more prominent components occupying my brain at the moment (which is, almost certainly, a very privileged position to be in).

Also, as Csapo discusses in their post: when looking at the world, grief is a totally rational response. I think some of what I’m grappling with is grief not just for the world that was, but also, at a smaller level, for what my industry, my working life, was. Part of that is because AI is everywhere right now, especially within the tech industry. It’s an invasive weed, a virus in a largely unvaccinated world. Even in the moments where I ponder changing careers - where would I go that isn’t already overrun by this virus?


I wouldn’t blame the broader world for feeling a touch of schadenfreude towards the tech industry at the moment. Tech workers in the Global North - particularly those who lean into the Silicon Valley/venture capital mindset - are responsible for a great deal of the tenuous working conditions (often branded as ‘disruption’ 🙄) that so many others now face. And now, with mass layoffs sweeping the tech industry, perhaps some feel it’s what we deserve?

That’s not true, of course - no one deserves to be put in a position of financial or emotional distress, or pushed into homelessness, when a corporation decides they’re surplus to its requirements because an AI can do the work instead.

But for those of us who are now feeling shaken by the challenging job market and working conditions: can we take this industry hardship as a nudge to get over our exceptionalism? To embrace a class consciousness that’s long been missing in tech spaces?


In Henry Desroches’ recent (excellent) essay A Website to Destroy All Websites, he highlights Ivan Illich’s concept of radical monopoly:

that point where a technological tool is so dominant that people are excluded from society unless they become its users.

[…]

We can map fairly directly most technological developments in the last 100 (or even 200) years to this framework: a net lift, followed by a push to extract value and subsequent insistence upon the technology’s ubiquity.

The automobile and the Internet are both offered as examples - and while Desroches doesn’t call it out, my brain latched onto AI as another technology that is arguably following that trend, and at an accelerated rate.

And this eager adoption (and insistence on use) is happening despite the fact that LLMs are tools built through exploitation (of people, of the environment, of our digital commons), and are used for further exploitation.

Something seems to be deeply amiss in what we imagine our tools are for. […] I’ve watched as new technologies - particularly the most novel and ‘intelligent’ ones - are used to undermine and usurp human joy, security and even life itself. (Ways of Being - James Bridle)

There is a dream often offered at the altar of AI: that we can achieve the same output with less work, for the same pay, and have more spare time for actually living our lives. Perhaps this happens for some people, but the far more common outcome of these AI-driven workplace changes seems to be: you will work more rather than less (if you’re lucky enough to have a job), and you will build more wealth and power for billionaires.

Of course, we live in a capitalist system - my idealism had me forgetting that this trend is nothing new. As Talking Heads have been singing for several decades: same as it ever was.


My friend Paul Campbell mused a while ago on Mastodon that AI has strong parallels to plastic - they’re both incredibly convenient, and both incredibly environmentally destructive.

This analogy feels pretty apt to me, though I think it can be taken further - perhaps LLMs are the cognitive equivalent of microplastics. We’re still learning the full, negative effects of the latter on our health - and the same could be said for the impact of AI and LLMs on our memory and learning skills.


I find myself saddened by how so many of my friends and peers have leant into using (and loudly promoting) AI/LLMs. Perhaps that sadness is stronger than it should be? I recognise that we all have our own boundaries, compromises and challenges to work through - as the saying goes, there’s no ethical consumption under capitalism - and we are all imperfect human beings. I am definitely no exception to that rule. And I can’t blame people for wanting shiny new technologies, for wanting things to be easier.

This particular sadness has latched onto me more tightly than others, though…

Other systems of exploitation in our lives - such as those involving fossil fuels, or food supply chains that torture animals - have existed for decades, if not centuries. But LLMs aren’t a long-standing technology. We’re talking a handful of years of mainstream use, maybe a decade at most - and the flaws of these technologies and the companies behind them have been clear for just as long. It’s a question that my friend Jan Lenhardt has considered as well: we’re getting in on the ground floor here, without generational baggage or long-standing societal bad habits. And yet, despite the widely documented problems, we’re embracing LLMs this enthusiastically?

Ah, but capitalism is comfortable for the privileged if you ignore the exploitation (and for those without privilege, it’s just business as usual). I shouldn’t be surprised. Still, Larry Garfield’s post on hearing the constant refrain of “it is what it is” rings true - apathy from any one person is heartbreaking. To rub salt in the wound, this compounding, collective apathy erodes our individual agency.


Where do we go from here? Look, I’m not sure why you were expecting a random blog post to have any meaningful answers. I’m not writing this to absolve people, nor to make peace with the situation.

Part of me wants to be more understanding about those who choose to use LLMs. We’ve all got to pick and choose the battles we can take on, and if this isn’t one you can grapple with right now, that’s life, and I get it. Another part of me wants to maintain the rage, particularly at anyone who’s happily cheering for a future dominated by AI and LLMs. If you talk to me about such matters, I can’t promise that I’ll be patient enough to take the ‘high’ road.

For me - writing this out, connecting some dots, understanding my feelings a little more - it’s provided a good reminder that AI/LLMs aren’t the root cause here. Instead, we’re facing two long-standing, entangled systems - capitalism and colonialism - that reliably take a carrot (convenience) and stick (exploitation) approach with many of us in the Global North, with the promise that ignorance is bliss.

If you’re still here and looking for ideas, here’s what I plan to do (if it resonates with you too, that’s just a nice bonus): express gratitude more often, appreciate art and support artists, show up for my friends and family, connect meaningfully with my colleagues, contribute mutual aid for those in need, and stand in solidarity across intersections.

More directly on that last point: Love and support for trans folks. Blak & Black lives matter. Free Palestine. Fuck fascists.