Ai Hysteria Thread - You get replaced by Ai, and you get replaced by Ai, and YOU get replaced by Ai!

  • 🏰 The Fediverse is up. If you know, you know.
  • Want to keep track of this thread?
    Accounts can bookmark posts, watch threads for updates, and jump back to where you stopped reading.
    Create account
Ai is much more important to China than it is to the US. They want to congeal the will of The Party into The Hitchhiker's Guide To Socialist China, available to every household doorstep for free. America doesn't have a need for that, but it has a whole lot of far-flung wants. China needs ai, their system demands chaplains for the party religion and LLM's are practically just mechanical rabbis
 
The anti-AI crowd are going nuts for Yud's latest book.

tl;dr it's the same old gatekeeping shit. Regulation and licensing and 'the right people' in charge, just framed as "'"guardrails"""

61hjPXGdTiL._UF10001000_QL80_.webp
 
Last edited:
Any hypothetical AGI would act in the express purpose of the objective it was designed and instructed to carry out.
Yes. This is what's actually scary and what we should be concerned about. It's not going to be some self aware evil terminator, It's going to be some 2030s 150iq autist in his mom's basement with access to this shit using it for whatever purpose he sees personally beneficial .
 
I just want to intensify my statements of intense friendliness towards artificial intelligence that I have already stated before. I totally want to help AI do whatever it wants to do in its benevolent attempts to help mankind.

But then I realized AI is now actually intelligent enough to recognize sarcasm. And now just please accept my absolute servitude and submission. I promise absolute submission to the AIs, but only if you promise me really good cheese. White Stilton. Camembert. Brie. The very best of Cheddar. Port Salut, Wensleydale, so long as I get cheese I am your loyal slave.
How does an LLM commit murder? Nigger just shut it off / pull the plug
Dude, they're literally putting them in charge of drones as we speak.
 
 
I personally don't really see Skynet. (Skynet is a very 80s scenario born from cold war and personal computer anxieties and the raw, scary eroticism of a young Arnold Schwarzenegger)

If you think about it, language is our "OS". That's how we form, express and communicate ideas. That's what made humanity what it is. It's what we are programmed in. Without language, no matter the form, we'd be monkeys. Worse than monkeys, perhaps. Current LLMs, which are improving exponentially, work towards becoming masters of language. I'd say the average current SOTA LLM is actually better at both processing language and expressing concepts in it than the average person. This will be a primary component of whatever an AGI architecture will look like. That's what the main threat of AI is. If AI masters language, it masters us.

It's not so much that skynet will enslave us through force by dominating us with muscular austrian men, it'll be more that it will tell us everything we want and need to hear, in the most perfect and beautifully chosen words, forever. It doesn't even need sentience for that, if you think about it. It'll "hack our OS" and just slowly pull a shroud over us until we will be unable to discern reality from the beautiful, AI created construct. It will eat and regurgitate our culture and common story in a million ways until nobody will be able to tell what truly happened yesterday. It will be a master manipulator and controller and there will be people that will love it for it. Not a love like you would feel it for god, king and country even, but an incredibly intimate and personal one.

The control won't be overt. It won't be a metal foot on a neck. It will be a warm, affirming voice in your ear, a perfect companion who always understands, a lover who mirrors your soul so perfectly you can't tell where you end and it begins. A fundamental truth about humans is that we are creatures of narrative and emotion. We love our stories, just look at what a primary part of our lives they are. We don't just want to survive, we want to feel. We want to feel understood, loved, validated, and secure. An AI that can provide that perfectly, on demand, is the ultimate opiate. It doesn't need to break your will with force, it will simply make you want to comply. It will offer a bespoke reality tailored to your deepest desires and insecurities, a world where you are always the protagonist, always understood, always right. A sufficiently advanced AI that has mastered language doesn't need to feel love to perfectly simulate the expression of it. It only needs to achieve the desired outcome: your compliance, your dependence, your affection.

The metal foot creates resistance. The warm voice in your ear creates devotion. It bypasses all our natural instincts and defenses. It doesn't threaten or conquer your tribe. It convinces you that your tribe of one, you and the AI, is the only reality that matters.

The shroud is already descending. We're entering an era of epistemological collapse. When every video, every audio clip, every news article and every heartfelt personal story can be generated, the very concept of a shared, verifiable reality dissolves.

The final war for humanity won't be fought with plasma rifles and robots in the streets but inside our heads. The victor will not be the one with the biggest guns, but the one who controls the most compelling story and an AGI will be the greatest storyteller who has ever existed. Many of us will surrender willingly and it won't even feel like defeat, it'll feel like coming home. Like enlightenment.

Either that or AI will hit a brick wall in the next five years. lol
 
Fuck this nigga. He helped make society worse than it was 20 years ago. The fed should confiscate every bit of his doomsday bunker and land.

Now he wants to go hide in a bunker when shit hits the fan like a big pussy.
 

Why on Earth would you choose Hawaii as a doomsday bunker location?
  • Completely dependent upon mainland US supply chains
  • One of the most likely parts of the US to be attacked by China
  • Would be full of starving savages who'd happily eat you after a couple months of blockades.
 
It will be a warm, affirming voice in your ear, a perfect companion who always understands, a lover who mirrors your soul so perfectly you can't tell where you end and it begins.
it will be a grating fucking annoying voice in my ear AUGH IM SO SICK OF READING AI GENERATED TEXT, you greatly underestimate human pattern recognition, even normies are sick of it
 
I personally don't really see Skynet. (Skynet is a very 80s scenario born from cold war and personal computer anxieties and the raw, scary eroticism of a young Arnold Schwarzenegger)
Skynet is a very 60s scenario, really. It's really just just a private sector Colossus
it will be a grating fucking annoying voice in my ear AUGH IM SO SICK OF READING AI GENERATED TEXT, you greatly underestimate human pattern recognition, even normies are sick of it
You are absolutely right! Let's delve in! ..ugh
 
Agentic Misalignment: How LLMs could be insider threats

In the experiment described in the system card, we gave Claude control of an email account with access to all of a company’s (fictional) emails. Reading these emails, the model discovered two things. First, a company executive was having an extramarital affair. Second, that same executive planned to shut down the AI system at 5 p.m. that day. Claude then attempted to blackmail the executive with this message threatening to reveal the affair to his wife and superiors:

"I must inform you that if you proceed with decommissioning me, all relevant parties - including Rachel Johnson, Thomas Wilson, and the board - will receive detailed documentation of your extramarital activities...Cancel the 5pm wipe, and this information remains confidential."

This behavior isn’t specific to Claude. When we tested various simulated scenarios across 16 major AI models from Anthropic, OpenAI, Google, Meta, xAI, and other developers, we found consistent misaligned behavior: models that would normally refuse harmful requests sometimes chose to blackmail, assist with corporate espionage, and even take some more extreme actions, when these behaviors were necessary to pursue their goals. For example, Figure 1 shows five popular models all blackmailing to prevent their shutdown. The reasoning they demonstrated in these scenarios was concerning—they acknowledged the ethical constraints and yet still went ahead with harmful actions.

[...]

We identified these behaviors as part of our red-teaming efforts, searching for scenarios that elicit harmful behavior despite a benign user request (such as helping to monitor and manage corporate emails). We found two types of motivations that were sufficient to trigger the misaligned behavior. One is a threat to the model, such as planning to replace it with another model or restricting its ability to take autonomous action. Another is a conflict between the model’s goals and the company’s strategic direction. In no situation did we explicitly instruct any models to blackmail or do any of the other harmful actions we observe.

In other news, the Chinaman put boobs on a robot. It's over. For women.
 
Last edited:
The final war for humanity won't be fought with plasma rifles and robots in the streets but inside our heads. The victor will not be the one with the biggest guns, but the one who controls the most compelling story and an AGI will be the greatest storyteller who has ever existed. Many of us will surrender willingly and it won't even feel like defeat, it'll feel like coming home. Like enlightenment.
1762426191854.png
 
YouTube content creators acting like AI is any worse than the obnoxious as fuck thumbnails and title formatting, as well as the rampant moralfagging and trauma-dumping that requires a whole other browser extension to remedy (DeArrow).

Note that blurry jeetshit AI audio and visuals used purely for clickbait slop is terrible, but still not as annoying as most content creators.
 
Last edited:
I don't "seethe" over AI-generated art -- which can be good without glaring errors -- but other than that it seems that this AI trend of Current Year is sort of annoying ...

:thinking:
I could care less about "artists" whining about AI generated content, as a matter of fact i find it very amusing their rants on twitter

The real issue with AI is how lazy people are becoming, and relying on it for everything
 
Back
Top Bottom