24 JUNE 2024
Destroy AI
I’ve been struggling to articulate this idea, and maybe the problem is that it’s actually kind of simple once you put it out there, and there’s really no good reason to unpack a whole case for it once the thought is on paper.

I’m gravitating away from the discourse of measuring and fixing unfair algorithmic systems, or making them more transparent or accountable. Instead, I find myself fixated on articulating the moral case for sabotaging, circumventing, and destroying “AI”, machine learning systems, and their surrounding political projects as valid responses to harm.
In other words, I want us to internalize and develop a more rigorous appreciation of those who fuck up AI and its supporting systems.
With hegemonic algorithmic systems (namely large language models and similar machine learning systems), and the overwhelming power of capital pushing these technologies on us, I’ve come to feel that human-centered design (HCD) and the overarching project of HCI have reached a state of abject failure. Maybe they’ve been there for a while, but I think the field’s inability to respond forcefully to the ascent of large language models and the pervasive use of chatbots as panaceas for every conceivable problem is unflatteringly illustrative of its current state.
CHI and FAccT have failed to meaningfully respond to the deskilling of creative labor; to the environmental and humanitarian crises these systems cause or exacerbate around the world; or even to the co-opting of our spaces by grifters and con artists making up “probabilities of doom”. Indeed, in some ways these spaces have avoided the difficult conversations and welcomed the nonsense in, trying to avoid the anguish of facing a genocide in which we are collectively implicated, or the disillusionment of confronting our own roles as agents of the corporate states that invade, surveil, displace, and kill people.
I’m no longer interested in encouraging the design of more human-centered versions of these murderous technologies, or in informing the more humane administration of complex algorithmic systems that separate families, that bomb schools and neighborhoods, that force people out of their homes or onto the streets, or that deny medical care at the moment people need it most. These systems exist to facilitate violence, and HCI researchers who have committed their careers to curbing that violence at the margins have considerably more of something in them than I do. I hope it’s patience and determination, and not self-interested greed.
Regardless, there’s no way to make the administrative and bureaucratic systems of apartheid and violence more humane for the people subjugated by them.
I think we must forcefully put on the table the possibility that we will destroy systems that fail to make a compelling affirmative case for their existence. That threat must be credible. We should actively undermine and sabotage systems, and recognize that labor as a moral project that we engage in, the way the Luddites sabotaged machinery that tore people apart.
This isn’t really a post or a conversation for the HCD or HCI community. I’ve grown weary (and wary, I suppose) of the design community because they ultimately seem committed to … designing systems - an ideological project antithetical to this one.
If you think of yourself as a member of that community most people call “design”, I would ask you to pose a few challenging questions to yourself. Start with these:
Do you work with systems, or with people? Which would you follow if the two paths diverge? What if they’re in conflict and you can only follow or defend one? If you see a system dismantling a human being’s life, do you think that the system must be fixed, or that the system must be destroyed?
I wanted to end with a few positive notes. I like projects like Glaze at the University of Chicago, and while I’ve been trying to write a piece about this, I came across a manifesto on Mastodon that I thought was very cool and uncannily relevant. I’ve also seen some really rad indigenous art, and I know that there are other works that have explored the general space of destroying, sabotaging, and poisoning datasets and whatnot.
I’d be surprised if nobody has ever thought to put this constellation of ideas together in the same blog post - please hit me up if you’ve seen writing along these lines. I’d love to hear from other people who are thinking about this stuff.
If you’ve been thinking along these lines, or if you’re one of the people I’ve linked to, then please take this as encouragement and an invitation to be in dialog with the other stuff I’ve written about and pointed to, because I think resistance is necessary, and mustn’t be captured by the design-brained people.
If you have something to say about this, contact me.

I study human-computer interaction - more precisely, I study how people relate to individual algorithmic systems and to algorithmically mediated social ecologies, adapting theoretical lenses and frameworks from the social sciences to understand these phenomena.
That’s a little dense. Let me try again…
I mostly think about problems situated in technology - how gig work disempowers workers; why AI makes so many frustrating errors at the margins; or how the power AIs wield lets them become increasingly unhinged from the reality we live in. I try to figure out ways that we can think about those problems essentially as social relationships between people and systems that do things people used to do. Most of the time I do this by adapting and extending theories from other fields - fields that mostly study relationships between people - to help frame how we make sense of these problems.
Coming up with robust framings this way does a couple of useful things. First, it makes explicit that these “technical” problems are actually social problems, and that we should study them with commensurate depth, care, and attention to the histories and lived experiences of people. Second, it gives us a rich vocabulary and body of scholarship to inform how we can make sense of these complex technical structures as cultural structures.
I think that those two things together make it harder for advocates of complex technologies like AI to argue that they (or other technical experts) are the only ones equipped to discuss these things. I ultimately hope to advance a narrative that represents these technologies as artifacts of culture, and as things that we all experience and therefore have standing to think about and critique.
I was the interim director of the Center for Applied Data Ethics at the University of San Francisco. Before that, I pursued a PhD in Computer Science at Stanford, working with Michael Bernstein.
I earned my B.A. in Anthropology & B.S. in Informatics, specializing in human-computer interaction, both from UC Irvine in 2014. I wrote an honors thesis on the culture of Quantified Self while I worked under Tom Boellstorff.