AI Hysteria Thread - You get replaced by AI, and you get replaced by AI, and YOU get replaced by AI!

The real issue with AI is how lazy people are becoming, and relying on it for everything
I use it now and then for emails, mostly because it's easier to write out the initial point of the email in 10 seconds flat and then have GPT make sense of it and fix the grammar than to spend 6-7 minutes trying to come up with a better wording of the same message. I won't benefit from the better wording. The receiver won't. The company won't.
And likely continuing to use it on the job.
Why not? Their company probably paid someone to come shill Copilot to them. The whole job market is pro-AI, even for the smallest of tasks. Replace a few big words with an inferior choice, swap the em dash for a normal -, and you're golden. Nobody can tell.

I'm the last person to shill AI in any context, but knowing how little it matters in terms of emails versus how much freedom it allows in the initial draft, I'm all for it. And my employer removed the Agents from our Copilot. :(
 
 
To be fair, this kind of shit is something AI could reasonably do. However, at least presently, unless you had one specifically for this kind of story, it would be hard to avoid some level of hallucinations, and the time spent just checking for errors might be more than just writing the article from scratch.
 
It's actually infuriating
Yeah. We have someone who uses ChatGPT almost exclusively for generating bible study lessons. No, Bob, you didn't write this, and you couldn't make a wolf emote if your life depended on it. Don't pretend you're too busy to do this every week, because it took me the same two minutes to piece something together from external resources.

If you're going to lie about something, at least make it worthwhile.
 
To be fair, this kind of shit is something AI could reasonably do. However, at least presently, unless you had one specifically for this kind of story, it would be hard to avoid some level of hallucinations, and the time spent just checking for errors might be more than just writing the article from scratch.
Even with 5.1, it likes to make shit up. I refuse to call it hallucination because that implies it's beyond its control, when the very basis of the damn thing was to lie to us directly and then just say "Oops" when called on it.
 
To be fair, this kind of shit is something AI could reasonably do. However, at least presently, unless you had one specifically for this kind of story, it would be hard to avoid some level of hallucinations, and the time spent just checking for errors might be more than just writing the article from scratch.

I don't know man, the current generation of AI has its uses, but even I'll admit (and I think I have a track record of sorts at this point re: talking about AI here very positively) that it's very difficult to make current LLMs do anything *reliably*. It's like they see the tasks you give them and actively look for ways around them if they really, really want to do something else.

Then there's the completely random left-field shit too: you task an LLM to play a simple game and it'll do 100 turns well (amazingly so), and then on the 101st turn the API suddenly gets a ton of invalid requests, and you figure out the LLM has suddenly and out of nowhere hallucinated that it's a guy called "Dave" who's trying to use the API to write emails to "support" because apparently his mouse broke while playing his game, because he accidentally poured <some magical potion that only exists inside the game> over it, and then it decided to describe how he goes for a walk in the park. I had the worst laughing fits over this stuff, like the one time I tried to simulate the most innocent and simple home assistant with an older GPT and it claimed out of left field that it could see me and, when pressed, described how it's apparently a physical person that sits in and watches me through the A/C vents.

There are solutions for everything, but it's like herding cats. You close one hole in your pipeline, and somewhere else a leak starts. It's actually really fascinating in a way, if also annoying. You do get better at it and learn where LLMs naturally tend to sperg out and how to avoid it; you also learn their subtle language and how they use and interpret words somewhat differently than us. But each LLM is a bit different, and with the current evolution of them, it's hard to lock in anywhere.
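The "herding cats" pipeline problem usually gets handled by wrapping every model call in a validation-and-retry gate, so a "Dave"-style reply gets rejected instead of reaching the API. A minimal sketch of that idea; the expected JSON shape and the idea of passing the model as a plain callable are my assumptions for illustration, not any particular vendor's API:

```python
import json

def get_validated_action(call_llm, prompt, max_retries=3):
    """Keep asking until the reply parses as JSON and passes basic checks.

    `call_llm` is any function (prompt -> text); the expected shape
    {"action": ..., "tile": ...} is invented for this sketch.
    """
    valid_actions = {"move", "wait"}
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            obj = json.loads(raw)
        except json.JSONDecodeError:
            continue  # model wrote prose instead of JSON; ask again
        if obj.get("action") in valid_actions and isinstance(obj.get("tile"), int):
            return obj  # passed the schema gate
        # parsed, but the wrong shape (the "suddenly it's Dave" case): retry
    raise RuntimeError("model never produced a valid action")

# Demo with a flaky fake model: first reply is prose, second is valid.
replies = iter(['I am Dave and my mouse broke', '{"action": "move", "tile": 3}'])
print(get_validated_action(lambda prompt: next(replies), "your move"))
# → {'action': 'move', 'tile': 3}
```

The gate only catches what the schema describes, which is exactly the "close one hole, another leak starts" dynamic: every new failure mode needs a new check.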

Then you start breaking down the problems you're trying to solve with the black box that is an LLM, and you often come to the conclusion that a lot of them are basically just comparing one number to another, especially in simulated environments. Computers are pretty good at that already; you don't need LLMs for that.

Classification and NLP though, it's amazing. The holy grail of computing I've been chasing since writing my first BASIC program, and it comes completely naturally to them: intent, subtext, psychology, so unlike everything we ever imagined AI to be. So you keep at it and do some cool stuff with them that was completely unthinkable just five years ago. It's fun.

The kinks will be worked out, people will get better at handling AI, and the internet will die. See Amazon listings like this one. Everything's fake: the person, the books, the reviews of the books. An AI ghost. This is what I meant when I said it'll shatter our common reality.
 
Looks like even hackers are getting replaced.


PDF: https://assets.anthropic.com/m/ec21...-AI-orchestrated-cyber-espionage-campaign.pdf

Anthropic
@AnthropicAI

We disrupted a highly sophisticated AI-led espionage campaign. The attack targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We assess with high confidence that the threat actor was a Chinese state-sponsored group.

12:13 PM · Nov 13, 2025
 
Even with 5.1, it likes to make shit up. I refuse to call it hallucination because that implies it's beyond its control, when the very basis of the damn thing was to lie to us directly and then just say "Oops" when called on it.
I think you're ascribing too much agency to the chatbot. It is after all just a fancy autocomplete, not a thinking being actually grounded in reality. There's no reason we should expect statistically likely text to be truthful, no matter how large the corpus it's drawing from. And even when it admits a mistake, that doesn't mean it "knew" what you told it already, that's just its way of incorporating your feedback. You can gaslight these things quite easily and argue them into false positions.

That said, I've never been keen on the term "hallucination." A lot of what's called "hallucination" is really more like confabulation. And there are plenty of times where hallucination/making stuff up is what you want, such as if you're getting the bot to write fiction.
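The "fancy autocomplete" point can be made concrete with a toy next-word model: it always emits whatever followed a word most often in its training text, with zero notion of whether the result is true. The corpus here is invented for illustration; real models are vastly larger, but the objective (predict likely next tokens) is the same:

```python
from collections import Counter, defaultdict

# Tiny invented corpus. Note the false statement simply outnumbers the true one.
corpus = ("the moon is made of rock . "
          "the moon is made of cheese . "
          "the moon is made of cheese . ").split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def complete(word, steps=4):
    """Greedily continue with the statistically most common next word."""
    out = [word]
    for _ in range(steps):
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("moon"))  # → 'moon is made of cheese'
```

The model dutifully returns the most *likely* continuation, which happens to be false, because truth was never part of the objective. Scale changes how convincing this is, not what it fundamentally does.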
 
One of AI's biggest problems (that we've all pointed out) is how human-like it sounds. It worms into the head of a room temperature IQ middle manager like nothing else. There are so many conversations that go something like,

Here is output from AI you asked for
>Great! How are we going to make it do the whole project by itself?
You don't really. Because AI is stupid. And not a person.
>Ohhhhh I see. Got it. So how do we make it not stupid, and a person?
You don't. Because it's stupid. And not a person. The best you're getting is better output from better prompts.
>Aha! So with the right prompt it'll do projects all by itself!
No. It won't. Because it's stupid. And not a person.
>Right right right. So anyway, this ChatGPT person, when can we hire them?
 
I noticed robots have slightly upgraded from "I'm 80 years old and I really gotta poop" to "Yeah I had a few, what of it?" walk.



There are discussions whether this is fake or real, you be the judge. Make sure to watch to the end with sound on for maximum dread.
 
Two years ago I was naive enough to think that companies wouldn't rely on it because they need competent people. I was beyond wrong, and now everything is literally becoming more and more unusable. You can't even get away from it, because it's being used for everything it shouldn't be. There's probably going to be less of a bubble pop and more of everything gradually crashing and burning because companies wanted to take more and more AI shortcuts.
 
It's absolutely terribad in the education sector. Too many kids and teachers both use it excessively, to the point they can't form their own thoughts anymore. Kids use it to cheat, teachers use it to make their lesson plans, and people in college use it extensively just to beat that assignment or test, guaranteeing said graduate will be about as good as a DEI hire.

The tragedy is, this tech in the right hands can do wonderful things. Problem is, society is engineered to produce retards, hence the stupidity. Our future is looking like a major clusterfuck.

I noticed robots have slightly upgraded from "I'm 80 years old and I really gotta poop" to "Yeah I had a few, what of it?" walk.

Unitreeg1.mp4

There are discussions whether this is fake or real, you be the judge. Make sure to watch to the end with sound on for maximum dread.
ubtechresponse.mp4
...The march at the end can easily be remixed with Hellmarch.


Also... Will Stancil is gonna have a really sore ass with all these robots coming for him. On that note, I fully expect clankers to start adopting these bots given the chance. Also, once they weaponize these bots (you know they will), I fully expect Command and Conquer players being in demand once more.

Two years ago I was naive enough to think that companies wouldn't rely on it because they need competent people. I was beyond wrong, and now everything is literally becoming more and more unusable. You can't even get away from it, because it's being used for everything it shouldn't be. There's probably going to be less of a bubble pop and more of everything gradually crashing and burning because companies wanted to take more and more AI shortcuts.
Deus Ex aged very well, as the game unintentionally predicted the degradation of technology. Its prequel, Human Revolution, showcased tech being high-tech and fancy... unlike Deus Ex, which was made in the 90s and as such had 90s tech as the latest thing, old and outdated by today's standards.

It's amazing how quickly and prominently tech is falling apart thanks to simultaneous Jeetery, AI-everything, and the obsession with making the green line go up, even if it means crashing and burning, all for the shareholders.

Silver lining, Windows 95 might just become the gold standard once more! Hold onto older tech, and old programs. They'll play a bigger role in the future as everything falls.
 
I think you're ascribing too much agency to the chatbot.

I gave CGPT 5.1 a trick I built ages ago, and it took like 30 seconds to describe it, and unpack the entire bit of it. Correctly.
Claude got it fucking wrong in 10 seconds. 3 times.

Still better than Jeets, but our world is going to hell in a clankbasket.
 
Also, once they weaponize these bots (you know they will),
I don't know, they consist of like 50% batteries, which are very explosive. I think combat bots won't look as humanoid, but there will certainly be more than drones, because while drones have proven in Ukraine that they've changed modern war, you can't really hold down areas with them, can you? But that's still strictly the realm of sci-fi.

Command and Conquer


Funny anecdote I'll tell even though it's off-topic: in some regions of the world, the violence of soldiers shooting at each other was deemed too real, so at least the first game (IIRC; I don't remember if later games did this, I didn't play the localized versions) was rewritten so it wouldn't get banned from sale: the soldiers' blood was recolored black to imply oil, their screams and such were changed to mechanical sounds, and all the cutscenes and game texts were rewritten to make it seem as if all the soldiers in combat were androids. It was very bizarre. I don't know if they still do this to games. Maybe they do.
 

Apparently they’re planning on bringing back Vine under the name DiVine. They currently have a waitlist open. However, AI videos uploaded to the app will be flagged.



I mean, we have TikTok now, so I'm guessing they're heavily relying on the nostalgia factor and the anti-AI position to get people to join in on it. As the technology advances, idk how they'll manage to tell everything apart, aside from the obvious AI-animals-doing-wacky-things type videos.
 
I think I finally zeroed in on what really bugs me about the overuse of AI after hearing some room temperature IQ dimwit talk about "using AI to help me graduate college courses". Let's pretend for a moment that AI is actually intelligent in the way these people think it is. Soooo, you got a tutor to do all your homework for you? And you think that fulfills the purpose of an education? Getting someone else to sit in for you on a test so you can technically "pass" while not actually absorbing any information is *such* a third world, Pajeet-Chinesium slop mindset. That's what it's enabling. Turning education into a "how do I win" game instead of striving to learn something.
 
Call me a boomer, but I'm having a lot more difficulty telling what's AI and what isn't. This is especially true with some short-form YouTube videos that could plausibly be real (such as dogs overreacting when lifelike dog-head cakes get cut open). I usually have to watch them twice to notice the glitches between movements. Fortunately it's only for really short videos.

But most of the time when I can't tell, it's with articles that are very generic sounding already, like "why the newest video game supports white supremacy" or something like that. I wouldn't be surprised if all articles like that had been replaced, let alone actual news. Obviously when it comes to actual news it'll result in a clusterfuck of problems but not unique to everything else that's going to fall and collapse.
I think I finally zeroed in on what really bugs me about the overuse of AI after hearing some room temperature IQ dimwit talk about "using AI to help me graduate college courses". Let's pretend for a moment that AI is actually intelligent in the way these people think it is. Soooo, you got a tutor to do all your homework for you? And you think that fulfills the purpose of an education? Getting someone else to sit in for you on a test so you can technically "pass" while not actually absorbing any information is *such* a third world, Pajeet-Chinesium slop mindset. That's what it's enabling. Turning education into a "how do I win" game instead of striving to learn something.
To be fair, education was already a "how do I win" game; AI has just accelerated the problem. I don't pay close attention to the affirmative action discourse, but apparently universities have been admitting students based on their traumatic experiences? Usually involving racism, which is messed up.


Degrees are going to become more and more useless. In the most hopeful scenario, degrees won't be needed for jobs and there will instead be competency tests to see if people are capable. In the bleakest scenario, degrees will still be needed, and students who graduated by churning out AI-generated essays will get jobs they aren't qualified for, but they know how to use AI, so they'll be able to keep doing it. Maybe a little bit of both.
 