Destroy AI - "We should actively undermine and sabotage systems, and recognize that labor as a moral project that we engage in, the way luddites sabotaged machinery that tore people apart."

  • Want to keep track of this thread?
    Accounts can bookmark posts, watch threads for updates, and jump back to where you stopped reading.
    Create account

24 JUNE 2024

Destroy AI

I’ve been struggling to articulate this idea, and maybe the problem is that it’s actually kind of simple: once you put the thought on paper, there’s no real need to unpack a whole case for it.

I’m gravitating away from the discourse of measuring and fixing unfair algorithmic systems, or making them more transparent, or accountable. Instead, I’m finding myself fixated on articulating the moral case for sabotaging, circumventing, and destroying “AI”, machine learning systems, and their surrounding political projects as valid responses to harm.

In other words, I want us to internalize and develop a more rigorous appreciation of those who fuck up AI and its supporting systems.

With hegemonic algorithmic systems (namely large language models and similar machine learning systems), and the overwhelming power of capital pushing these technologies on us, I’ve come to feel that human-centered design (HCD) and the overarching project of HCI have reached a state of abject failure. Maybe they’ve been there for a while, but I think the field’s inability to respond forcefully to the ascent of large language models, and to the pervasive use of chatbots as panaceas for every conceivable problem, is unflatteringly illustrative of its current state.

CHI and FAccT have failed to meaningfully respond to the deskilling of creative labor; to the environmental and humanitarian crises these systems cause or exacerbate around the world; or even to the co-opting of our spaces by grifters and con artists making up “probabilities of doom”. Indeed, in some ways, these spaces have sidestepped the difficult conversations and welcomed the nonsense in, sparing themselves the anguish of facing a genocide in which we are collectively implicated, or the disillusionment of confronting our own roles as agents of the corporate states that invade, surveil, displace, and kill people.

I’m no longer interested in encouraging the design of more human-centered versions of these murderous technologies, or in informing the more humane administration of complex algorithmic systems that separate families, bomb schools and neighborhoods, force people out of their homes or onto the streets, or deny medical care at the moment people need it most. These systems exist to facilitate violence, and HCI researchers who have committed their careers to curbing that violence at the margins have considerably more of something in them than I have. I hope it’s patience and determination, and not self-interested greed.

Regardless, there’s no way to make the administrative and bureaucratic systems of apartheid and violence more humane for the people subjugated by that system.

I think we must forcefully put on the table the possibility that we will destroy systems that fail to make a compelling affirmative case for their existence. That threat must be credible. We should actively undermine and sabotage systems, and recognize that labor as a moral project that we engage in, the way luddites sabotaged machinery that tore people apart.

This isn’t really a post or a conversation for the HCD or HCI community. I’ve grown weary (and wary, I suppose) of the design community because they ultimately seem committed to … designing systems - an ideological project antithetical to this one.

If you think of yourself as a member of that community most people call “design”, I would ask you to pose a few challenging questions to yourself. Start with these:

Do you work with systems, or people? Which would you follow, if the two paths diverge? What if they’re in conflict and you can only follow or defend one? If you see a system dismantling a human being’s life, do you think that the system must be fixed, or that the system must be destroyed?



I wanted to end with a few positive notes: I like projects like Glaze at the University of Chicago, and while I’ve been trying to write a piece about this I came across a manifesto on mastodon that I thought was very cool and uncannily relevant. I’ve also seen some really rad indigenous art, and I know that there are other works that have explored the general space of destroying, sabotaging, and poisoning datasets and whatnot.
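
If you’ve never seen what dataset poisoning actually looks like, here’s a minimal sketch of its simplest form, label flipping: corrupt the labels on a fraction of a training set and watch the resulting model degrade. To be clear, this is a toy illustration on synthetic data, not how Glaze or the other projects above actually work (Glaze computes adversarial perturbations against feature extractors rather than corrupting labels), and every name and parameter in it is made up for the example.

```python
# Toy sketch of label-flipping data poisoning: train the same classifier on
# clean and poisoned copies of a dataset, then compare test accuracy.
# Illustrative only - not Glaze's method or any real project's pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# A synthetic binary classification task stands in for a scraped training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a random fraction of the training examples."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # binary labels: swap 0 and 1
    return poisoned

# Train on progressively more poisoned copies of the same data and
# evaluate on a clean, untouched test set.
for fraction in (0.0, 0.2, 0.4):
    y_poisoned = poison_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = model.score(X_test, y_test)
    print(f"labels flipped: {fraction:.0%}  test accuracy: {acc:.3f}")
```

The real projects in this space are far subtler than this, but the underlying point of leverage - the training data itself - is the same.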

I’d be surprised if nobody has ever thought to put this constellation of ideas together in the same blog post - please hit me up if you’ve seen writing along these lines. I’d love to hear from other people who are thinking about this stuff.

If you’ve been thinking along these lines, or if you’re one of the people I’ve linked to, then please take this as encouragement and an invitation to be in dialog with the other stuff I’ve written about and pointed to, because I think resistance is necessary, and mustn’t be captured by the design-brained people.



If you have something to say about this, contact me


[Photo of the author]

Hi, that’s a picture of me.

I study human-computer interaction - more precisely, I study how people relate to individual algorithmic systems and to algorithmically mediated social ecologies, adapting theoretical lenses and frameworks originating in the social sciences to understand these phenomena.

That’s a little dense. Let me try again…

I mostly think about problems situated in technology - how gig work disempowers workers; why AI makes so many frustrating errors at the margins; or how the power AIs wield allows them to get increasingly unhinged from the reality we live in. I try to figure out ways that we can think about those problems essentially as social relationships between people and systems that do things people used to do. Most of the time I do this by adapting and extending theories from other fields - fields that mostly study relationships between people - to help frame how we make sense of these problems.

Coming up with robust framings this way does a couple of useful things. First, it makes explicit that these “technical” problems are actually social problems, and that we should study them with commensurate depth, care, and attention to the histories and lived experiences of people. Second, it gives us a rich vocabulary and body of scholarship to inform how we can make sense of these complex technical structures as cultural structures.

I think that those two things together make it harder for advocates of complex technologies like AI to argue that they (or other technical experts) are the only ones equipped to discuss these things. I ultimately hope to advance a narrative that represents these technologies as artifacts of culture, and as things that we all experience and therefore have standing to think about and critique.



I was the interim director of the Center for Applied Data Ethics at the University of San Francisco. Before that, I pursued a PhD in Computer Science at Stanford, working with Michael Bernstein.

I earned my B.A. in Anthropology & B.S. in Informatics, specializing in human-computer interaction, both from UC Irvine in 2014. I wrote an honors thesis on the culture of Quantified Self while I worked under Tom Boellstorff.

If this isn’t enough info, click on some links floating around this text in various places (determined mostly by the size of your browser).
 
"Destroying" this technology is impossible. You could only theoretically prevent it domestically. That won't stop China and Russia from doing it. "Destroying it" would naively just make those countries the world leaders behind this new technology. Akin to how the USA has been the world leader in the overall computer industry. Future generations would decry this sense of moralistic naivete because of how they'd need to use entirely Chinese AI apps instead of American ones.
 
"Destroying" this technology is impossible. You could only theoretically prevent it domestically. That won't stop China and Russia from doing it. "Destroying it" would naively just make those countries the world leaders behind this new technology. Akin to how the USA has been the world leader in the overall computer industry. Future generations would decry this sense of moralistic naivete because of how they'd need to use entirely Chinese AI apps instead of American ones.
Does every policy have to be decided on the basis of whether someone else might do it instead? Better start putting shit in all the food to save money. We wouldn't want to fall behind!
 
I’m okay with this. Socially destroy the AI bandwagon and leave it to the professionals. Get rid of the corporate takeover. It would be best if we as a society just forgot AI exists altogether.
 
Does every policy have to be decided on the basis of whether someone else might do it instead? Better start putting shit in all the food to save money. We wouldn't want to fall behind!
It's more akin to nuclear weaponry: "If we don't build one, our enemies will, and they'll use it against us."
 
This worked out real well for the Luddites after all. We still don't use mechanical looms to this day!
 
Get rid of the corporate takeover. It would be best if we as a society just forgot AI exists altogether.
This would accelerate the corporate takeover. It's not the companies that the legislation against AI shit is affecting; it's regular people messing with it who get hounded by corpos and "unions" that claim to be against it but later get outed using it. It's all fake and gay and I hate it.
This "AI" is a marketing name for algorithm shit we've had for ages, but people are just using it a lot more and for weirder/niche applications.
 
This would accelerate the corporate takeover. It's not the companies that the legislation against AI shit is affecting; it's regular people messing with it who get hounded by corpos and "unions" that claim to be against it but later get outed using it. It's all fake and gay and I hate it.
This "AI" is a marketing name for algorithm shit we've had for ages, but people are just using it a lot more and for weirder/niche applications.
I would prefer they do it in the privacy of their own bedrooms if they do commit to AI. I just want common people to shut the fuck up about their new AI-integrated bullshit scam.
 
I would much rather the "common people" get full use of AI to make random shit for fun, including stomping over other people's copyrighted material, because I don't respect intellectual copyright laws (I think they are gay), and governments and corporations be banned from using AI.

I also think there are a lot of jobs that should be replaced by robots - assembly line work, call center customer service work - these are undignified jobs. Men should spend their days touching grass and murdering each other, not working these lame jobs.
 
I would prefer they do it in the privacy of their own bedrooms if they do commit to AI. I just want common people to shut the fuck up about their new AI-integrated bullshit scam.
I'm equally tired of people hyping up AI shit as a "replacement" for things as I am of people fearmongering about it. It's a novelty at best, and I very much want to continue generating funny schizo rng audio and images alongside actually making my own non-AI-generated stuff. Kinda wish I was able to program shit myself, honestly - I'd just be making the goofiest specialized image generators and chatbots if I could.

I also think there are a lot of jobs that should be replaced by robots, assembly line work, call center customer service work, these are undignified jobs.
Yeah, you say that till you get fucked constantly by automated customer service algorithms, like I have for well over a decade lmao.
 
I would much rather the "common people" get full use of AI to make random shit for fun, including stomping over other people's copyrighted material, because I don't respect intellectual copyright laws (I think they are gay), and governments and corporations be banned from using AI.

I also think there are a lot of jobs that should be replaced by robots - assembly line work, call center customer service work - these are undignified jobs. Men should spend their days touching grass and murdering each other, not working these lame jobs.
Common people are cattle. They don’t even know how to use git to pull an offline model. They’re using the products sold to them and propping up a market of thoughtless consumption because they can’t be bothered to learn to code. Kill Big AI, kill the copyright.
 
Replace "AI" with "NFTs" in any modern article on AI, pro or against, and it is word for word an article from five years ago.

AI, like blockchain and NFTs, probably does have some actual use cases, but most of the groups using the term are just trying to hype up their low-quality products and companies. It's all the same grift we see every few years, from the same types of people, riding whatever new tech thing is in vogue. AI just doesn't add significant value to most products, or even save much real time compared to more traditional production methods. Even if it were much more accurate than it is now, it still wouldn't improve on existing production methods in most cases.

I figure in a few years we'll have moved on to some new scam-ridden tech, like rat-neuron computers or maybe 3D Printing will finally go mainstream, and then AI will return to being the domain of those handful of companies that have an actual use for it.
 