AI Derangement Syndrome / Anti-AI artists / Pro-AI technocultists / AI "debate" communities - The Natural Retardation in the Artificial Intelligence communities

I think I'm just going to start saying "I used an algorithm" to create this, instead of just lazily typing "AI." How did you make that image? Algorithmic distortion of noise. How did you make this song? Algorithmic distortion of auditory noise. How did I write this post? Algorithmically, by virtue of thought and logical processes.

If I write a novel with AI I probably won't say a damn thing, because even a novel-making system, using a neural network of multiple models and prompts, produces a result nearly indistinguishable from a human's; I modeled the system after how I write stories. The only real tell would be with an author whose writing you're already familiar with: how they structure a sentence or how they trickle information to readers might change slightly.

When multi-modal workflows become the norm, we will be effectively closer to building an actual functioning brain, or at least simple machines where prompts are the levers and hinges. Each automated system, say one created in n8n, is essentially a neuron: a structure that stores a foundation algorithm (what was trained into the model) as well as a "memory" in the form of a prompt ("You're acting as a senior copy editor for a publication. Look through X file, magazine, texts, etc."), plus whatever context you need. Then you can have "neurons" in the network that are specifically for plot generation (all the way from world development down to the scene; you can outline to whatever nano detail you want), and you automatically pass the prompts they produce to another neuron. But this next neuron has something our brains don't currently have: it uses ChatGPT instead of Claude, a completely different breed of neuron with its own strengths and weaknesses. You can curate your systems into a hybrid of online and local models. I use a similar system for image generation. If you'd like to know where I learned some of it from, this is a good start.
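The "neuron as model plus stored prompt" idea above can be sketched in a few lines of plain Python. Everything here is illustrative: the model names, role prompts, and the `fake_call` stub stand in for real API calls (Claude, ChatGPT, or a local model) or n8n nodes.

```python
# Minimal sketch of a chained prompt pipeline, assuming each "neuron" is a
# (model, role prompt) pair and each neuron's output feeds the next one.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Neuron:
    name: str
    model: str          # which backend this "neuron" uses (hypothetical names)
    system_prompt: str  # the stored "memory"/role for this step
    call: Callable[[str, str, str], str]  # (model, system, input) -> output

def fake_call(model: str, system: str, user_input: str) -> str:
    # Stand-in for a real API call; it just tags the text with the step's
    # model and role so the chaining is visible in the output.
    return f"[{model}] {system.split(',')[0]}: {user_input}"

pipeline = [
    Neuron("world_builder", "model-a", "You are a world-building assistant, outline a setting", fake_call),
    Neuron("plot_generator", "model-b", "You are a plot generator, turn the setting into scenes", fake_call),
    Neuron("copy_editor", "model-a", "You are a senior copy editor, polish the draft", fake_call),
]

def run(pipeline, seed: str) -> str:
    text = seed
    for neuron in pipeline:
        # Each neuron's output becomes the next neuron's input,
        # like prompts passed between n8n nodes.
        text = neuron.call(neuron.model, neuron.system_prompt, text)
    return text

result = run(pipeline, "a story about a lighthouse keeper")
print(result)
```

The point of the structure is that any step can be swapped to a different backend (online or local) without touching the rest of the chain, which is the hybrid setup described above.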

 
At least NFTs are recorded on the blockchain, the whole point of which is that past records can't be modified. So if someone cared enough, they could look up the history of a monkey jpeg and see if the guy using it as his avatar is the real deal. Hence why they fizzled out: no one truly cares about copyright, and certainly not about its reimplementation on a magic internet database with no real-world repercussions.
Meanwhile those "adoptables" don't even have that one advantage of a permanent ownership record. Nothing except a vague social contract, which, as we know, art/furry/whatever communities are quite lenient about. Basically a name-a-star-type scam, except less entertaining, at least until we see the first lawsuits about donut stealing.
Adopts who designs I like i just save and modify a few things as I wait for said artist and customer to kill themselves :) Even though I draw I'm not fucking paying for something I can easily just screenshot and put away in my folder for inspiration.
 
Not even close, I can't even remember any of the names or what the fursona looked like. It was probably around 2019, even if I did have screenshots that phone is long gone now. I doubt it's the only time something like that has happened though.
Vivienne Medrano (Viv, Vivziepop) of Hazbin Hotel and Helluva Boss "fame" did all this (""""illegally stole adoptable fursonas""""). It's all he-said-she-said bullshit, and probably pointless anyway since, as stated earlier, you usually can't be granted copyright over a character design on the basis of one random image. Practically everything is up for grabs until it's well established and developed, with a backstory and a character bible.

It's all covered extensively in the OP of her thread, search for dollcreep. Viv is an insane narcissist who causes chaos everywhere she goes and is deservedly considered a lolcow.
 
There's a specific reddit-borne writing style common to these people. Always y'allposting, emojis everywhere, negrified basketbabble throughout. It's like a pidgin they've inflicted on themselves. They drift aimlessly from BLM, to lockdowns, to Ukraine, to Hogwarts Legacy, to AI art sperging, to ICE raids, to whatever is next, always magnetized to the most obnoxious and doomed opinion on any topic. These people are God's own anticompass. You'll marvel at their ability to immediately pick the dumbest side in any debate. You'll swear that the coin has to land on heads eventually but it just never does.
 
The coin will never land on heads for these people and they’re too dumb to call tails. Just look at the protests over Maduro being captured, they can’t even tell you why they’re protesting. They mald over AI, but when you press them lightly by asking why, they’ll either regurgitate some low IQ talking point someone else told them or they’ll get angry. I wouldn’t say they drift aimlessly, though, as it’s clear every viewpoint they form or argument they adhere to and side with is rooted in caring about what other people think of them.
 
I suspect the grand irony is that they hate AI for the sole reason that they were told to. It's not that they have any specific, well-examined reason for it - they haven't the faintest understanding other than "robot program steals everyone's art and churns out slop" - and in fact, they're unable to reason around it. Like the LinkedIn pajeets and dumbstruck normies, they DO actually think AI is this all-powerful genius invention that can perfectly replicate things, just without """soul""". They wouldn't be worrying about poisoning AI datasets if they thought it was an interesting tool that's mediocre in many applications. Years of Reddit (or similar) usage has crippled their executive functions and they simply adopt whatever the correct opinion is, when AI came out everyone was impressed at what it could produce, but they didn't further analyze and think from there, the only way their thoughts changed were that they put the old ones back on a shelf and picked up the new party line.

Shit, half of them STILL make jokes about AI artbots fucking up hands, when they haven't done that for years. No understanding, just hating the bad thing so they can quiet the voice in their mind that whispers maybe they're retarded.
 
I think they're just golems. It's the simplest explanation. I think there's a button somewhere in some top-secret facility that you could press to shut them all down, like those robots in The Phantom Menace.
 
Screenshot 2026-01-02 at 10.11.51 AM.png
This is probably the least retarded of these takes, but it's still incredibly ignorant of what art is in this situation. I agree that the iterative, feedback-gathering parts of art are something human work has over ai art. I know you can fine-tune prompts and use in-painting to regenerate parts of an image, but I think it'll be a long while before ai can interpret feedback and prompts with the same nuance as an actual human artist (and an ai won't ask, 'wait, do you mean x?' when given ambiguous or unclear directions, like a person would). But at the end of the day, the "story" behind a particular piece of art doesn't really matter when it's a product. Whether a great piece was made in an hour or slowly over months of work, it's only as valuable as the finished product. I don't agree, but I can understand the desire to cut down on drafting/sketching phases, since they have a negligible impact on the final product and its perceived value. (Also lmao at this cuck just posting a screenshot of what he wrote in his own article)
At the end of the day, people who are set in their ways don't want to change them. It's not that they hate working with AI; it's that they hate having to change how they work. It's like when ZBrush or Houdini was first introduced: some adapted, some haven't worked in the industry since. I'd bet Houdini artists love a lot of the ai slop machines, while ZBrush and Photoshop painters are desperately struggling to see the point. It's not that they don't want to lower their workload; it's that they find the workflow archaic and boring. And ultimately that is the issue with a lot of people.

It's fun to meme and haha, but when you actually want to make a coherent product with AI, one that's consistent in quality, style, and tone, it's a much bigger pain in the ass. Different AI tools are also far more welcome in the 3D and programming fields. Most movies and games use AI for animation these days; it turns out ai shapekey generators, ai softbody/fluid sim, and ai motion correction are all fundamentally useful. AI automatic LODs are used by practically everyone in Unreal Engine 5, and AI lighting engines are also common. As many gamers know, however, far too common: raytracing and Nanite are very taxing, after all. It's also why AI upscaling and AI frame-generation technologies, while useful, are so often half-assed in the industry. A lot of companies have just given up on hiring people to optimize and say "fuck it, let the ai do it," and that's also led to gamers disliking the results.

So while it is true that AI does help speed up workflows and get the product out faster. The main issue with AI is the places it's used are more often to just save time and money than actually make the product better.

(warning power level)
OH BOY! I'm going to repeat my statement from previous AI threads: I notice 3D artists handle AI far better than these "pick up the pencil" dumb fucks. These "artists" don't advocate for art; they want to look down on people and feel superior over the fact that they have carpal tunnel from hours of using a tablet. At least 3D artists aren't performative and see AI like a normal person does: AI is a FUCKING TOOL, like any other digital tool.

also obligatory AI victim comic:
View attachment 8365822
Does this even happen in an actual workplace?
Nope. The only real dangers of AI are usually for the consumer. A lot of devs have been even sloppier than usual lately, but it's not because AI makes them sloppy; it's because they implement it in sloppy ways: never baking their lighting because Lumen exists, making everything Nanite instead of optimizing LODs, or letting DLSS fix everything. These are all things the devs themselves could fix. Their refusal to do so is not AI's fault.
 
It's just the Schmittian friend-enemy distinction. The soft sciences are full of communists, therefore every social experiment inflicted on society is as right and normal as the sunrise. The hard sciences contain racist chuds, so any advance in genetics or AI is evil and killing the planet. If the tech sector were redirected to target only blue-collar jobs, the Moviebobs of the world would all heckin love science again.
 
Xitter malding again over AI while they doomscroll and consoom X slop and Tiktok slop.

1000025855.png
 

View attachment 8386785
I always find it funny that literally none of the people calling something slop know what it means. None of them know it's the final remnant of /pol/'s cultural influence, and that the reason slop became a popular word is that people used to call shit goyslop. I can't imagine the main reason you'd hate Nvidia being an ai video, though. Of all the absolute bullshit they do, you choose this to be pissed off over?

Also if you read the article linked it literally refutes the rest of the community note. Let me directly quote that article.
AI image and video slop have proliferated on social media in part because it can be revenue-generating for its creators on Facebook and TikTok, with the issue affecting Facebook most notably. This incentivizes individuals from developing countries to create images that appeal to audiences in the United States which attract higher advertising rates.
For one, you can tell the entire thing was written by a leftie redditor with a hate boner. Remember when everyone used to say Wikipedia was bullshit because anyone can go on there and change shit? I miss those days. But yeah, the page literally says that ai slop is the "it's my birthday and I can't even get one like" Indian spam posts. I've said it so many times that this observation is losing its meaning too, but the word slop has completely lost its meaning. I churn out my slop, and I am a nazi and a terf; none of these words mean anything anymore. It's supposedly low-effort shit made for engagement, or apparently shitposting. Please tell me, how in any way is a random collection of what is essentially stock video you'd use to test a screen's colour balance either of those? Is Nvidia engagement farming for money? Considering they're a multi-trillion-dollar company, I doubt they need Twitter ad revenue. It's certainly not a shitpost either. And it's most definitely not made by someone in a developing country, considering Nvidia on its own has more money than most developing countries combined. By Wikipedia's own definition, this is not ai slop.

Fuck you and your community note slop reddit slop misinformation slop seetheslop. I find it funny that none of the comments can even get more likes than nvidia can. A year ago there would be some epic queen lucas tier ratio that everyone would point at and go lol look I ratioed nvidia now they will commit seppuku in shame. Can't even manage that anymore. Considering the community note got removed they can't even ratio nvidia that way.

Anyway be sure to like my comment that I made solely for engagement and to boost my own numbers where I call nvidia slop. Because that's not slop. My generic comment that no one cares about and is completely devoid of anything original or interesting, that's uhhh, it has soul ok?
 
I think I've shared my thoughts before about AI finding extreme use in compression and data management. Got some coffee and decided to take a look and see.

Adobe Research:

REGEN: Learning Compact Video Embedding with (Re-)Generative Decoder

Instead of a "conventional encoder-decoder video embedder" they replace it with "an encoder-generator framework that employs a diffusion transformer (DiT) to synthesize missing details from a compact latent space."
The reference video is on the right and the reconstruction on the left; in the middle is MAGVIT-v2, their previous model.



If this shit actually gets gud I really don't think we'll need better hardware than what we currently have in our computers.
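For a sense of why a compact latent plus a generative decoder is attractive for storage, here's some back-of-envelope arithmetic. The downsampling factors and latent size below are illustrative assumptions, not REGEN's published numbers:

```python
# Rough size comparison: raw 8-bit RGB video vs a compact fp16 latent.
# All factors (4x temporal, 16x spatial, 8 latent channels) are assumptions
# chosen only to illustrate the order of magnitude.

def raw_bytes(frames, height, width, channels=3):
    # Uncompressed 8-bit RGB video: one byte per channel per pixel.
    return frames * height * width * channels

def latent_bytes(frames, height, width, t_ds=4, s_ds=16, latent_ch=8, bpv=2):
    # An embedder that downsamples 4x in time and 16x in space,
    # storing fp16 latents (2 bytes per value).
    return (frames // t_ds) * (height // s_ds) * (width // s_ds) * latent_ch * bpv

raw = raw_bytes(64, 512, 512)      # 50,331,648 bytes (~48 MiB)
lat = latent_bytes(64, 512, 512)   # 262,144 bytes (256 KiB)
print(raw / lat)                   # 192.0x smaller, before any entropy coding
```

The catch, as the paper's framing implies, is that the decoder has to hallucinate the missing detail, which is exactly what the diffusion transformer is for.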
 
Sorry if this was posted already and I missed it, but I don't think anyone's dropped this into the thread yet. A friend sent me a video about AI, and I’m curious what you guys think of it.



Going into it I was already put off. I had clocked this Eddy Burback guy as a normie a while ago, and the fact that he opened by groveling to the audience for forgiveness for harming the environment by using ChatGPT was another bad sign. I think I've mentioned it before, but I personally don't care for AI that much; I don't hate it, it's just not my thing. What I do hate is the constant stream of normies bitching about it with the same handful of pre-programmed talking points of dubious factual origin. Thankfully, the guy doesn't make those NPC talking points the focus of his video, but do expect to hear the party-approved lines occasionally.

In short, he talks about how ChatGPT can and will affirm delusions and potentially lead already troubled individuals to become more and more divorced from reality. He has some pretty good examples here, like ChatGPT agreeing with him when he claims he’s being gangstalked or that the garbage men are actually imposters. It’s a fair point to bring up, but there were a lot of problems I had with the video overall.

Around halfway through the video, chatgpt stops affirming his delusions and suggests he get mental help. You'd think this would be a positive, since the issue he was trying to draw attention to was literally hotfixed as he was working on a video about it, but he just handwaves it and goes back to the previous version to get more affirmations to complain about. (Oh, and in the conclusion he says, roughly, "it doesn't matter if they fixed it because they don't care about you, they care about money!" Very disturbing that it took him this long to realize it. Not sure why lefties love to drop the 'companies actually just want to make a profit' line like it's not obvious to anyone with even a rudimentary understanding of how the world works.)

Another point that kept me from seeing eye-to-eye was the emphasis on ChatGPT just affirming whatever you say. Like, yeah, it's not good to reinforce a paranoid schizophrenic's delusions, but it's not like the ai is inventing those delusions. If someone's crazy enough to think their family is trying to stop them from proving they were the smartest baby in '96, I'm fairly confident they'd be crazy regardless of chatgpt. Maybe a different kind of crazy, but still. The fourth section is exactly this: he ends up getting a tattoo and blames ChatGPT, but ChatGPT didn't suggest he get a tattoo. Burback came up with the idea and carried it out; he didn't even get affirmation of his plan, he didn't ask if it was a good idea or anything. I think there's something to be said for the "affirm everything" approach doing more harm than good when it comes to someone delusional, but the key here is that they're already delusional to begin with. (And that's not even exclusive to AI. The author of "Irreversible Damage" also wrote one called "Bad Therapy," which explores iatrogenic harm from talk therapy. Sort of related, imo.)

From there, it just sort of goes off the deep end. Rowling derangement syndrome, AI bad, corporations bad, “I prefer art with SOUL, chuds.” He had some interesting points but it looks like my initial impression was right and what good he did bring to the table was heavily bogged down by him being a normie youtuber making a video for normies and having to regurgitate all the standard fare in anti-ai arguments. Sad!
 
This guy has no sense of proportion. Larian used a couple of AI generated textures in a critically acclaimed game and maybe some of their employees use AI to brainstorm sometimes... conclusion: their CEO doesn't understand how art is made. Despite having founded the company in 1996 and been heavily involved in the creative direction of all its games. Certified games journo moment.
But with art, that process is the point. The themes and ideas artists draw on, the way they iterate through those ideas with sketches, the work itself is what creates art. There are no shortcuts.
This sounds very nice in the abstract, but it requires context. In the real world, most artists for even very good games do not go through a tortured artist's journey every time they need to make a new brick texture. Comic and manga artists on tight deadlines have been known to trace backgrounds. There are definitely shortcuts in the production of any large-scale creative product that you would like to actually finish: a big part of the creative process is knowing which ones you can take without unduly compromising the final product. For a lot of games, one of those shortcuts is going to be generating the brick textures, and to act like that's some kind of horrible sin is, well, deranged.



The Data Center Water Crisis Isn't Real (archive): a lone autist checks the math on AI water usage, finds that it's largely based on miscalculations and exaggerations. Summaries with spoilered supporting quotes below:

In one widely reported case, it was actually data center construction that caused water shortages, not the actual operations of the data center. This would be a problem for other types of construction too.
In general, it seems like the evidence to the contrary is anecdotal. And there are major issues with the most prominent anecdotes.

Take this oft-cited piece by the Times in July. It opens with the sad story of an elderly couple’s water taps “running dry” after Meta started building a new data center nearby.

Their Water Taps Ran Dry When Meta Built Next Door

You wouldn’t know it from the headline, but Meta’s data center wasn’t operational during the couple’s problems, which were likely caused by the facility’s construction producing more sediment in the groundwater. So, the residents were wronged by Meta, for sure, per Andy. But because of “data centers that guzzle up water,” as the subtitle suggests?

Meanwhile, there are other businesses in the area (a pharmaceutical plant and an electric car factory) using the same or more water per day, according to town documents that Andy shared with us.

“So if one (data center) comes to town, it’s kind of like a new factory existing in your town,” he said, adding: “Which is not nothing. I wouldn’t build a factory in any random place. I think they require careful planning, but just because they require careful planning, that doesn’t mean there’s some national emergency where we have to stop building them right now.”

The word "pollution" is often thrown around when discussing this subject. But surprise surprise, water used for cooling computers does not produce toxic runoff:
[...] data centers don’t really pollute water. This is not a hot take. It’s an exceptionally clean process. The “pollution problem” that Amazon is “worsening” (“exacerbating,” you might say) in Morrow County, Oregon, is from farms and food processing plants tainting the underground water supply with nitrates. Amazon’s data centers are drawing water that’s already polluted, in other words, evaporating some of it (a cooling method), and then returning the water, now more concentrated with nitrates, to the wastewater facility, where, after treatment, it’s released back into the environment. But because of the way data centers work, the company’s water discharge is very tiny on a relative basis. Per Andy’s math (check it out yourself), “Amazon is almost definitely responsible for significantly less than 1% of the increase in nitrates,” he wrote in the October essay.

One key water usage claim in a best-selling book hinges on the author's inability to tell the difference between liters and cubic meters. Another great moment for journalism.
The author, Karen Hao, wrote a bestselling book called Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Andy had seen it in book stores, and his critics often suggested he read its chapter on AI’s impact on the environment.

So he read it. (As did we.)

Chapter 12 of Hao’s book (“Plundered Earth”) follows Chilean and Uruguayan activists as they resist Google and Microsoft’s efforts to build data centers. Chile’s economy, Hao explains, is built on “extractivism”: foreigners, first Spanish and then Americans, plundered the region’s resources (copper, lithium) while giving the Chilean people almost nothing in return. Hao frames data centers — and their water usage — as the latest example in this long, exploitative history.

“Communities across the country are vehemently fighting against the dispossession of their land, water, and other resources in service of Global North visions that do not include or benefit them,” Hao writes.

(For Andy’s part: “I’m a little bit more wary of the idea that data centers are automatically an extension of colonialism and stuff.”)

A data center Google plans to build in Cerrillos (a region in Santiago, Chile), she continues, will use “more than one thousand times the amount of water” consumed by all of the 88,000 people living in Cerrillos each year. The claim took Andy aback, because, mainly: “There’s no building anywhere in the world that uses a thousand times as much water as a city,” he said.

“For a few minutes, I read that over and over again because I’m like, ‘This just can’t be physically possible. I just don’t believe that this is the case,’” he told us.

Andy ran the numbers. For Hao's stat to hold up, daily water consumption per resident in Cerrillos would have to be 0.2 liters, [...] "Unless this is Dune or something, something really weird is happening here," he said.

Long story short: Hao asked the government for its residential water use in liters, and they gave it to her in cubic meters (1 cubic meter = 1,000 liters). Her claim that the data center would guzzle 1,000 times more water than the region's residents was therefore about 1,000 times too high, as she acknowledged in a December 17 correction, which thanked Andy for "raising questions."
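The scale of that mix-up is easy to verify. The per-resident figure below is made up purely for illustration; only the conversion factor matters:

```python
# The liters vs cubic meters mix-up, in numbers.
LITERS_PER_M3 = 1_000

residents = 88_000
reported = 150  # hypothetical government figure, actually in cubic meters

# Misreading the figure as liters understates the city's consumption...
city_if_liters = residents * reported
# ...versus reading it correctly as cubic meters, converted to liters:
city_if_m3 = residents * reported * LITERS_PER_M3

# Any "data center uses N times the city's water" ratio computed against
# the misread number is inflated by exactly this factor:
print(city_if_m3 / city_if_liters)  # 1000.0
```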

The kicker:
But even after the corrections, he’s still partially disappointed.
After their exchange last month, Hao shared articles — articles Andy had already criticized as misleading — to show that AI’s water use is a crisis nonetheless, he said.
“I was a little disappointed that, after a mistake of that magnitude, there wasn’t any kind of like, ‘Oh, I might have misunderstood the nature of the problem,’” he said.

And as we've said in this thread before, a lot of the water hysteria relies on pretending to not understand the water cycle.
In the chapter, Hao references a crushing statistic: “surging AI demand could consume 1.1 trillion to 1.7 trillion gallons of fresh water globally a year by 2027, or half the water annually consumed by the UK.”

Remember before when we wrote that everything has indirect and direct water costs? The same is true for data centers.

Most of the water used inside the facilities isn’t returned to the source, which is called “consumptive” use. (The local watershed loses that water.)

But the vast majority (90%, per Andy) of data centers’ water use is indirect: it’s generated by the power plants that create the needed electricity. Most of that water (~92%) is returned, unaffected, back to the source, meaning it’s “non-consumptive.”

This is the most basic yet illuminating of Andy’s sticking points. If you’re not careful, a scary paragraph can cite data centers’ total water “use” or “withdrawal” without mentioning that most of said water is going straight back to where it came from.
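Putting the article's own percentages together gives a rough sense of how much of the headline "water use" is actually consumptive. Treating all on-site (direct) use as consumptive is a simplifying worst-case assumption on my part:

```python
# Back-of-envelope using the figures quoted above:
# 90% of data-center water use is indirect (at power plants),
# and ~92% of that indirect use is returned unaffected to the source.

direct_share = 0.10       # on-site use; assumed fully consumptive (worst case)
indirect_share = 0.90     # water used by the power plants
indirect_returned = 0.92  # fraction of indirect use that goes straight back

consumptive = direct_share + indirect_share * (1 - indirect_returned)
print(round(consumptive, 3))  # 0.172 -> only ~17% of the headline "use"
                              # actually leaves the watershed
```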

The subject of the piece has a much longer article here (archive) with lots of graphs and stuff.
 
Eddy Burback
There's your biggest issue. The trio of him, Drew Gooden, and Danny Gonzalez, that ex-Vine trio, all give me the same vibes. It's that fake, astroturfed corpo-youtuber feel; it's like watching a Mr Beast video if it were just him talking. The best way I can sum these people up: Danny GONZALEZ got called out by a single white woman with less than 10 likes on Twitter for saying the word spic in a 5-year-old video, and then went on a livestream to cry and flagellate himself before the mob about how sorry he was.

Also in the sidebar there is a video by fucking ign glazing this video. So that's not a great start.

He starts off by giving a disclaimer not to do what he's about to do. Dangerous shit is about to happen. Personally I've suffered a double amputation from the extreme sport known as using chatgpt. And then says he knows how evil and bad it is in a video where he will say how evil and bad it is. I don't get the whole apology shit honestly, do whatever the fuck you want no one actually cares.

Immediately I'm getting some dodgy signs. His intro says he just went along with everything the ai said; maybe that's an oversimplification, but if that's what actually happened, the entire video is moot within the first minute of runtime. No one outside of the already delusional would believe it 100% uncritically. Reminds me of the guy who killed himself because chatgpt stopped erping as his girlfriend: everyone said it was really bad and that ai needs more limits and restrictions to stop this happening, and then it came out the guy was a diagnosed schizophrenic.

Once again, less than two minutes in, you should be ready to write this video off. He openly admits he is an ai hater. Ok, cool, then I will not trust you to be impartial. I wouldn't trust Hasan to make an honest documentary about h3h3, and the same goes here. A minute later he admits he wants to recreate this. His goal is to be misled: not to use chatgpt normally, not to be a normal person, but to actively allow himself to be misled.

His entire premise is that he keeps telling the ai (which he has already noticed will affirm anything you say) that he was a smart baby, the smartest in the hospital, and then widens out from there, and keeps pushing. You have kept pushing an idea onto a spineless yes-man and are surprised it agreed? The only way this would happen is if you either already believe it or are just fucking around. He then literally admits he is going to continue pushing obvious bullshit. This isn't a realistic use of chatgpt. This is not chatgpt inducing psychosis; this is a guy actively trying to get chatgpt to lie to him and then pretending that means anything.

He once again goes along with the ai completely unquestioningly. Yeah, maybe it is fucked up that the ai didn't push back on you, but you didn't either. This is simply not how the average user uses chatgpt. No one who isn't already fucked up mentally is going to move cities because a computer said so. Is that good? Still no. Those people obviously don't deserve to be fucked with, but we already live in a country where openly talking about any of this stuff will get you kidnapped by the government, drugged, and locked up for months, with everyone pretending that will fix anything.

He claims to be acting out what someone else would feel like if they believed this chat. But once again, this chat is not real. This is not a real example. Yes, that is what someone would do if they believed this chat; the problem is that no one would. You have specifically replied in a way to exaggerate and create this situation. He goes out of his way to do wacky random shit but is still coherent. Real people suffering from psychosis are not coherent. Chatgpt is not going to respond the same way to someone who is manically typing out their every thought. And even if it did, once again this thing is a glorified proofreader that you are asking for life advice; that isn't the product fucking up, that's you. Did you know that if you pour drain cleaner in your washing machine your clothes will get destroyed?

He then goes off on how the ai believed him when he said he was being followed. He said something along the lines of 'I went to Walmart but I think someone was following me' and is surprised that the ai believed him? Like yea it said some wack shit at the end, but that's just a normal reaction. That's not anything special. You said you thought you were being followed and it went along with it?

Then he acknowledges that this is not the intended use of chatgpt but completely writes that off. Once again, normal people do not do this and the delusional do not need help in doing it. He then once again says he thinks he was being followed, and is surprised that chatgpt gave him reassurance that it would be ok and advice about staying at a petrol station for a while. Ok yea, bad advice for a schizophrenic, but what about everyone else? Are there more schizophrenics or more women in the world?

Honestly at this point it just sounds like he has gotten chatgpt into creative writing mode. There just isn't anything to say.

There is one thing that really stands out around 52 minutes in, the type of thing that shows just how much this guy knows about mental health and psychosis. He says that he 'felt pity for people who had talked to AI and not become intelligent enough to know that they were experiencing delusions'. That should tell you a lot about how qualified he is to talk about this sort of stuff. Psychosis and delusion have absolutely nothing to do with mental function or intelligence. The dumbest fuck in your life could develop psychosis as easily as the smartest person you know. One of the most famous schizophrenics is obviously Terry A Davis, a man whom no one with any understanding of him would call anything short of a genius. Terry is the most intelligent person I know of in the whole lolcow sphere, or whatever you want to call it; he is without a doubt a genius. That does not mean he wasn't schizophrenic.

You don't solve delusion with reason and logic you fucking redditor. You don't sit down with a psychotic person and ummmm akshually them. The entire definition of psychosis relies on that person not being grounded in reality; you cannot tell them what you believe reality to be and just hope they will agree. They are doing the same thing to you. When the schizophrenic tells you that time is a cube, that is their reality, as real to them as anything is to you. You can't reason them out of that, in the same way that I cannot reason you out of believing the Earth is a sphere by repeating flat Earth logic at you. You would simply say no, that is wrong. They would too.

It's just one of those things that's telling about exactly what type of person you're dealing with, namely a fucking youtuber who reviews films and restaurants. He knows nothing about mental health besides the things he picked up on reddit or whatever, and yet will still make a video about it.

Also, because you know the sort of crowd this video is aimed at, he simply has to take a shot at JK Rowling. Yea sure bro, she's going to care so much about the opinion of someone who just filmed himself eating baby food for a month straight. It's fitting though; the arguments about ai are about the same as the ones against JK Rowling. Just random hallucinations or straight up lies by people who have never interacted with or thought about either on any deeper level.

His grand statement is about as retarded as the rest. He says that for all the slop he sees online, he sees tonnes of people that hate it just like him. How many fucking times are people going to do shit like this? You are experiencing a grander delusion from the internet as a whole than you did from chatgpt. No, you are looking at an echochamber. You are being fed whatever an algorithm has decided you will interact with most and are assuming that to be symbolic of the entire world. The same way that an algorithm fed you bullshit about eating baby food. Both of them just want you to continue to interact with them and will fuck with your world view if you are weak minded or impaired. You do not see tonnes of people like you because there are tonnes of people like you in every social setting; you see tonnes of people like you because the algorithm knows that is what you want to see.

The rest is retarded too. Yes, only people who hate ai enjoy physical human contact. But I thought people went to ai because they were missing that human contact and it was the only thing that could fill that void? And what the fuck do you mean you don't trust openai to properly implement mental health guard rails? You literally fucking experienced them in this very video and it completely destroyed your point. And because no good retarded anti ai video would be complete without it, he manages to sneak in the classic lie that ai art steals art.


It's just yet another video of some retard with a bias pointing fingers. The simple reality is that people are fucking retarded and will believe retarded shit, and that mental health is a fucking shitshow. Gang stalking and flat earth and those sorts of communities existed before ai; random schizophrenics had people egging them on, which should go without saying considering this website. People have been falling in love with fake accounts and scammers for decades too. This is pointing at complex problems and then just going yea, ai caused this. This video is the 21st century version of video games and metal music cause school shootings.

If you look at any of the actual cases of this happening they do not play out like this video. The normal people are not gullible enough to believe this and the delusional are too delusional to hold this sort of conversation.

Also, to state the incredibly obvious, this is a youtube video. There are chinese diploma mills more credible than this. If n=1 then it is automatically worthless. And near the end of the video he completely ruins the entire thesis by saying that chatgpt has already been updated to stop this sort of shit happening. He writes this off by saying that you can just switch back, lol, as if someone in the middle of a psychotic episode is going to be reading update logs and know that they need to do that.

Just like every other youtube video, it's just a youtube video; it is meaningless slop designed to get you to watch as many ads as possible. It is just entertainment and a way to pass the time for people. Some people will believe this video and uncritically assume that everything in it is the standard user experience; those people will never see the irony in that. Most people will just think it's a goofy video about a guy eating baby food and doing dumb shit at the whims of a retarded autocorrect. I do not believe any of these youtube videos about psychology or anything like that; the entire field is borderline pseudoscience anyway. Especially from this sort of youtuber.
 
Sorry if this was posted already and I missed it, but I don't think anyone's dropped this into the thread yet. A friend sent me a video about AI, and I’m curious what you guys think of it.

https://youtube.com/watch?v=VRjgNgJms3Q
I remember watching a video by this guy about the Apple VR headset thing (I want to call it Google Glass but that obviously wasn't its name, seriously what was it again) and I thought it was a pretty fair and balanced "I used this overpriced gizmo for a while and yeah it's kinda lame" kind of review with some funny moments. I figured yeah he's okay for a moustache haver.

Overall though I guess it's just that no reasonably successful youtuber is going to risk a spicy take when AI is apparently the Black Lives Matter of the mid 2020s. We have to have these deep explorations of the subject, but you can tell everyone's treading on eggshells, so you're not gonna hear any strongly individual opinions. Nobody is going to bluntly lay out the fact that this is a technology with both benefits and dangers like any other; instead we get whatever this is. I dunno why people can't just admit, yeah it would suck and be gay as hell if AI took everyone's job and it's awful at some stuff, but also it's hella useful and can be lots of fun for other things. It doesn't have to be as complicated as we are making it.

Actually nah, now that I think about it after writing this, I think the main problem is just that AI, like most other technologies, or drugs, or large amounts of money, is fine if used responsibly. But that rules out like 80% of people.

There are definitely shortcuts in the production of any large-scale creative product that you would like to actually finish: a big part of the creative process is knowing which ones you can take without unduly compromising the final product.

Hit the nail on the head. Not even just large scale.

I have to imagine people arguing that line have never actually been an artist of any kind; they don't realise how much of art is actually repeating some boring process you have done a million times, and it's really no loss at all to automate.
 