AI Derangement Syndrome / Anti-AI artists / Pro-AI technocultists / AI "debate" communities - The Natural Retardation in the Artificial Intelligence communities
ok i gotta get it off my chest, the schizo in me has been obsessing over this all day today
Species | Documenting AGI, AKA "Drew", is an AI content farm, undoubtedly ran by an ind*an or other unsavory third world ape, who shall henceforth be known as Unnamed Jeet (or simply UJ) channel channel - preservetube
UJ has been posted several times in this thread, often to be ridiculed for his doomer sci-fi vids that are presented as fact. You, like me, likely encountered him from this video that went viral:
preservetube
This video is based entirely on a fictional "what-if" paper, a literal sci-fi story about how AI could maybe take over the world, full of wild speculation and predictions that have already fallen short: https://ai-2027.com/#narrative-2026-04-30
His most recent video compares GPT-4o to the Last of Us zombie virus, which, left unchecked, will bring about AGI and human extinction or whatever. GPT-4o is currently retired and has been for several months.
I'm not gonna bother asserting the slop allegations on every single video. It is, in my opinion, self-evident, and I am going to assume you, the reader, are able to figure that out on your own. I am, however, interested in this persona he's adopted to hide the channel's origins. The first two videos use a very obvious AI voice. The third video (2027: The Year AI Becomes Smarter Than Us) is when he debuts the "Drew" persona: a very obvious deepfake that only appears in weird, blurry clips no more than a few seconds long, exhibiting bizarre mechanical movement that no human would ever actually do:
"Drew" appears to be a real person, linked in an instagram account in video descriptions. Out of respect to the actual person, I won't go further than that. I believe this account has been hacked and purchased. it had gone inactive in 2021, and was only briefly revived shortly before the channel started posting it in video descriptions for "proof" it's not a content farm, and has now gone dormant again for over a year.
What stands out to me are significant discrepancies between the real Drew and the "Drew" persona.
Drew has brown eyes, a thick neck, and a wide jaw.
"Drew" has blue/green eyes, a narrower jaw, a thinner neck, and even larger eyes:
Whatever, maybe it's just lighting, right? That can change a lot. And these pics, if real, are 5+ years apart. Maybe he just aged. Well, here's "Drew" in his most recent video:
This guy's eye color changed, AGAIN.
Also, draw your attention to this image again:
This is "Drew" as uploaded on 2/7/2025.
This is "Drew" as posted on "his" instagram, 2/4/25:
Personally, I believe this to be deepfaked. But what happened to this sculpted brown-eyed hunk? Did he lose 50 lbs and change his eye color in just 3 days?
As a reader of this thread, you are no doubt familiar with the severe inconsistencies in AI-generated content, and I will leave it at that. Is it normal to change your eye color 3 times in the last 6 years? Am I looking too deep into it? Has Kiwifarms dot St user Elite Kipchak finally fucking lost it? I don't care, I think I am right, and that's all I need to know that I am right.
tl;dr: this guy is either AI or a shapeshifting lizard type. Either way, I'm inclined not to trust him.
As a reader of this thread, you are no doubt familiar with the severe inconsistencies in AI-generated content, and I will leave it at that. Is it normal to change your eye color 3 times in the last 6 years? Am I looking too deep into it? Has Kiwifarms dot St user Elite Kipchak finally fucking lost it? I don't care, I think I am right, and that's all I need to know that I am right.
You're not. This is important to assess right now, because this is scarier than the usual "oh look, the Anti turned out to use AI all along!" phenomenon. He talked shit about AI only to be replaced by it, definitely not of his own volition. I'm surprised this hasn't been talked about at all, as this "Drew" deepfake has been around since February 2025; it could've been its own mystery for some sloptuber or anti to cover. Why did this happen? Was he secretly a closeted AI lover? Did Altman send Mossad agents to kill the youtuber who dared insult his LLM? Only time will tell.
I apologize for doubleposting but I must turn your attention to the Anti-AI movement's greatest unsung "hero". That being [BlackLightJack].
Now, this CoryXKenshin-looking fellow (who facedoxxed himself on the fourth thumb) isn't your average AI-hating boomer; in fact, this guy was actually pretty funny. KEYWORD BEING 'WAS'.
He got his name from shitting on hilariously bad and outright illegal webtoons, starting with "The Fate", whose authors had a meltdown over it and copyright-struck the video. BLJ's voice is pretty pleasant and his jokes are deadpan and bluntly amusing. Another thing about him is that he's not like other artists. For one, although he is a furry and leans heavily toward reviewing furry media, he vehemently hates associating with them. In addition, he is not afraid to shit on leftist nonsense like tumblr art cartoons and "art fixing" (anyone who does that deserves to be replaced by AI), and even the Hogwarts Legacy debacle. He had potential. Jack's main claim to fame was his (former) webtoon channel: Jack vs. The World (formerly known as Jack vs. Webtoon). He'd trash webtoons of equal or worse quality than "The Fate", among them AI webtoons, which everyone in this thread can agree deserve the ridicule.
preservetube
Afterwards, the more BLJ dunked on the webtoons, the more depressed he became. Jack was really only doing it to appease the algorithm, and no matter what he tried, he'd either get zero traction or get demonetized. Not only that, his videos got repetitive as the same issues kept appearing in each shitty 'toon, such as writing that absolves the MC from any consequences, webtoon's inhumane scheduling conditions, et al. Thus, he announced he's retiring webtoon reviews and repurposing that channel for manga reviews. Since I'm not a hoodweeb and read manga very rarely (I basically only read Gold/Silver era Marvel shit), I have no personal opinions on this new direction. But the point is that FOR THAT SPECIFIC TIME, when webtoon dunking was funny, that was when I really liked Jack, and I still pop those older videos on from time to time, even knowing what I know now about him.
At a certain point, his content started shifting... in a bad way. You already saw the four Anti-AI top thumbs on his main channel. But what if I told you that was only the tip of the iceberg...
Welcome to "Jack Talks Back" a self-proclaimed "slop" channel, where Jack has made SIXTY total videos dooming about AI--or fifty-nine, giving the first one was dooming about Toy Story 5. He posts this every day, since the channel's creation on the 22nd of Februrary. His rationale for making this poor man's Clownfish TV is this quote on his community post.
I'm not mad, Jack. Just disappointed. And the only one going mad is YOU, by allowing AI to live rent-free in your head. No person looking to make money on YouTube would subject themselves to this much frustration and anger. During my time in Jack's 'cord server, I'd see the guy lament his poor childhood, how he's stuck alone with no caring family, and how he has to make ends meet relying on YouTube AdSense and his 9-to-5 job. He never went to college, lives in an apartment, and even got put in jail. Even with that in mind, pumping out videos reading articles about a technology he knows he hates in order to make ends meet isn't a good way to rake in cash. I mean, you're working a 9-to-5, so you already hate life, and you choose to fuck up your mind even more with irrational seething?
You can tell Jack is on the 9th level of AI Derangement given that he feels the need to invoke "Ai SlOp" even in situations entirely inappropriate to the subject at hand!
preservetube
Yeah because "AI Slop", the same word I'd use to call out fake animal fusions on Facebook, would be completely appropriate for AI in fucking nuclear warfare. He basically reads the article 80%, says some bullshit about "AI destroying the planet" and then concludes, "yeah this will totally happen dudes, have you seen the artificial intelligence movies bruh?" For fucks sake, for how much AI is downplayed as a "disinformation machine" it feels fucking weird for these guys to instantly start believing the AI they know is stupid the moment it starts saying something THEY don't like.
I'd list more examples of this AI derangement, but it's getting fucking late, and all of these content farm videos are basically the same and cover topics we've covered before ITT.
TL;DR: BlackLightJack is a youtuber who turned from webtoon reviewer into an AI Slop doomzombie. He's almost a semi-horrorcow when you think about it. Having eternal anger over a technology that's never going away... Sad, but also funny! And that is what this forum is for.
P.S. BLJ is into femboys. If you mention his kink he will ban you. Thought I should add that.
I apologize for doubleposting but I must turn your attention to the Anti-AI movement's greatest unsung "hero". That being [BlackLightJack].
Now, this CoryXKenshin-looking fellow (who facedoxxed himself on the fourth thumb) isn't your average AI-hating boomer; in fact, this guy was actually pretty funny. KEYWORD BEING 'WAS'.
I fucking hate people who are like "HAVEN'T U SEEN'T MOVIES ABOUT AI?!" I forgot to shit on Maher earlier for that too. Yes, because all fiction is perfectly predictive.
The other thing they all love to trot out is Syndrome from The Incredibles saying "if everyone's super...no one will be," and use that as a metaphor for AI art.
Syndrome was the villain. He was crazy. You weren't supposed to agree with him. If he had genuinely distributed his cool invention worldwide, it would've benefited millions. Yes there would be growing pains like any new technology, but it's a good thing if everyone's super.
Do a google search for "data race nondeterministic". Or read the article I linked in my last post. Here, I'll link it again, even quote a relevant part of the intro:
On massively multi-threaded systems, the non-deterministic nature of how machine floating-point operations are interleaved, combined with the fact that intermediate values have to be rounded or truncated to fit in the available precision, leads to non-deterministic numerical error propagation.
How many times do I have to reiterate this widely known and self-evident fact of computing before you accept your own notion of "common knowledge" is incorrect?
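For what it's worth, the order-sensitivity the quoted paper describes is reproducible in a few lines of stdlib Python; the values below are my own illustration (not from the paper), picked so the rounding is obvious:

```python
# Summing the same three floats in a different grouping changes the
# result: float addition rounds after every step, and a huge addend
# absorbs a tiny one entirely.
xs = [1.0, 1e16, -1e16]

left_to_right = (xs[0] + xs[1]) + xs[2]   # 1.0 is absorbed by 1e16 first
right_to_left = xs[0] + (xs[1] + xs[2])   # the two huge terms cancel first

print(left_to_right)   # 0.0
print(right_to_left)   # 1.0
```

On a single thread the grouping is fixed, so each run repeats exactly; the nondeterminism only appears once a parallel reduction leaves the grouping up to the scheduler.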
You're trying to argue that AI can or often fucks up generating the same picture multiple times? That's just not the case.
Even 3 or 4 years ago with Stable Diffusion, everyone was testing and calibrating it by generating the same image; they named it the Hello Asuka Troubleshoot.
Here is someone explaining it just like the pdfs and articles you're linking:
Anyone can generate a 99% accurate image with the same model and prompt.
Even LLMs can give the exact same verbatim response to a prompt if the temperature is set to 0.
Someone provides an example, and you say it doesn't mean anything and that it's just a one-off thing.
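To illustrate the temperature point (a minimal sketch with made-up logits, not any real LLM's API): at temperature 0, token choice collapses to an argmax, which is a pure function of the scores; above 0, it's a seeded sample, reproducible only when the RNG seed is fixed.

```python
import math
import random

def pick_token(logits, temperature, rng):
    """Pick a token index from raw scores. temperature == 0 means greedy."""
    if temperature == 0:
        # Argmax: a pure function of the logits, same answer every call.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature, then sample: reproducible only if rng is seeded.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 0.5, 1.7]  # made-up scores for 3 candidate tokens
greedy = [pick_token(logits, 0, random.Random()) for _ in range(5)]
print(greedy)  # [0, 0, 0, 0, 0] -- same choice every time
```

Real inference stacks add the parallel-summation caveat from earlier in the thread on top of this, but the sampling step itself works like the sketch.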
I'm "trying to argue" that LLMs and image generators aren't deterministic in the formal sense. Which isn't a debate, it's a fact. Often giving the same output doesn't make a function deterministic.
I'm "trying to argue" that LLMs and image generators aren't deterministic in the formal sense. Which isn't a debate, it's a fact. Often giving the same output doesn't make a function deterministic.
I'm lost on this argument. Yes, computers are not truly random; they cannot be. When generating random numbers, they usually take a constantly changing value like the date and time and run it through an equation to get a "random" number. Image-gen AI is doing something similar with the seed. If you have the same settings, seed, and model, you are going to get the same results unless something funky happens, because computers are complicated and vary wildly in hardware and software.
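The seed mechanism described above can be sketched with Python's stdlib PRNG (image generators use different algorithms and turn the stream into latent noise, but the principle, same seed in, same stream out, is the same; `noise` here is a made-up stand-in, not a real API):

```python
import random

def noise(seed, n=8):
    """Derive n pseudo-random values from a seed, like the initial
    latent noise an image generator builds from its seed."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

run_a = noise(1234)
run_b = noise(1234)   # same seed -> identical stream, every time
run_c = noise(9999)   # different seed -> different stream

print(run_a == run_b)  # True
print(run_a == run_c)  # False
```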
Nice moving the goalpost from image generation to LLMs because you got made fun of. And now from computers being nondeterministic but only when using AI to just LLMs being nondeterministic, which is correct* but not at all what you opened with. Assumedly you read a Wikipedia article, glommed onto some of the ideas without fully thinking them through and jumped at the opportunity to smugly share your findings with us. Now you continue to reply because you have to course correct your internet argument lest you look like a fool to internet strangers who probably wouldn't even recognize you across the site.
There are also multiple ways to interpret how you should triangulate an ngon. Ultimately the computer must determine how to solve these problems using algorithms that themselves are either deterministic or have variable elements whose outcomes are determined by deeper deterministic elements. That doesn't mean there can't be variance. It just means the variance came from something that ultimately is also fully deterministic. You aren't arguing anything even remotely relevant to my original point anymore.
Nice moving the goalpost from image generation to LLMs because you got made fun of. And now from computers being nondeterministic but only when using AI to just LLMs being nondeterministic, which is correct* but not at all what you opened with. Assumedly you read a Wikipedia article, glommed onto some of the ideas without fully thinking them through and jumped at the opportunity to smugly share your findings with us. Now you continue to reply because you have to course correct your internet argument lest you look like a fool to internet strangers who probably wouldn't even recognize you across the site.
There are also multiple ways to interpret how you should triangulate an ngon. Ultimately the computer must determine how to solve these problems using algorithms that themselves are either deterministic or have variable elements whose outcomes are determined by deeper deterministic elements. That doesn't mean there can't be variance. It just means the variance came from something that ultimately is also fully deterministic. You aren't arguing anything even remotely relevant to my original point anymore.
This is like Basic Programming 101. Are you sure you didn't pick the Special Ed option by, so to speak, a rounding difference?
Floating point is deterministic.
Rounding float to int is deterministic.
All random-number generator algorithms are deterministic.
Yes even the "introduce randomness" is deterministic when you use the same seed for the process.
Where again you are wrong: notice how full determinism relies on more than just using the same seed. You clearly didn't understand the difference between floating-point ops being deterministic and parallel floating-point values being accumulated in different orders producing different results.
Glad I was able to educate you to the point where you understand you were wrong, even if your ego is too proud to admit it (or to stop replying). If you have a problem with me using examples of nondeterminism from other areas - I understand some people struggle with abstract thought - that's totally fine, I'm happy to work at your level. The above link is specifically about diffusion models, not LLMs or float additions, and clearly explains sources of nondeterminism beyond the RNG seed.
Alright. I'll concede and add "unlocking the secrets to true random number generation" to the list of anomalous demonic sovl-destroying properties AI exhibits in otherwise normal God-fearing computers.
Alright. I'll concede and add "unlocking the secrets to true random number generation" to the list of anomalous demonic sovl-destroying properties AI exhibits in otherwise normal God-fearing computers.
Why are you so confused by this? It's not some novel AI-specific feature. I already gave you an extremely mundane example where adding 3 numbers together can vary (when the numbers are sourced concurrently).
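Anon's three-number example doesn't even need concurrency to show the grouping effect; the classic 0.1/0.2/0.3 case in plain Python already does it (my own illustration, not a value from the thread):

```python
# Grouping changes the rounded result, so float addition is not associative.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.1 + 0.2 already rounds up to 0.30000000000000004
right = a + (b + c)  # 0.2 + 0.3 rounds to exactly 0.5, then 0.1 + 0.5 -> 0.6

print(left == right)  # False
```

Concurrency only matters because it decides which grouping actually happens on a given run.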
Grok, I'm going to post a conversation I'm reading, and at the end of it I want you to answer these questions: what are these people saying, and who is correct?

Inco: I'm going to stop you right there. If I generate two images using the same seed with the same parameters with the same checkpoints and LORAs on the same hardware, they will create the exact same image. It's just running through calculations to arrive at an endpoint. It's deterministic, just like any other computer program. There is nothing different between that and, say, using an algorithm to generate a fractal or using a digital art program to create an image. There's no philosophy here. Just because I drew a picture doesn't mean there is literal magic inside of it, a piece of my very essence, my 'soul'. It's up to you to interpret the image and make of it what you will.

Personally, I think means of creation are merely a means to an end. Limitations can inspire new ideas, but that's just the ingenuity we picked up from having to survive in a world of compromises. The same way you can use AI to pump out mindless garbage, you can hire an infinite horde of sweatshop workers to pump out cheap 3D animation. Or you could utilize these tools equally for their strengths to bring about your vision as uncompromised as possible in a world otherwise made of compromises.

I don't like how AI is used either. I don't like the slop, I don't like the way it's shoved into everything, and I don't like how much trust people put into it. But those are problems with the world surrounding the technology. The bottom line is that all of this technology is truly wondrous, and it will have impacts reaching far into the future of humanity. I genuinely believe that the sophistication of machine learning is a technological marvel comparable to computers themselves. Even now, there are a lot of things you can do with AI that even just 10 years ago nobody would have believed. And we are still well within the technology's infancy.

I just wish people would stop blaming the technology for all of the problems, because they're shielding themselves from seeing deeper and more clearly into the root causes of their grievances.

Anon: I'm going to stop you right there. Floating point addition isn't associative, and AI does a lot of summing floats that are calculated in parallel (i.e. order not guaranteed), so it's not formally deterministic. Basically all generative models deliberately introduce randomness into the process, so the small chance a rounding difference could cascade and alter the outcome doesn't really matter. But there is a chance.

Inco: I generated a nice picture of a flower for you! (presents two seemingly identical generations of a flower) Actually, I generated it twice. Same seed, same prompts, same configuration settings. Like I said. (Provides visual proof) Turns out that among the well-known list of anomalous properties AI has, like the destruction of matter in the form of water, magically making computers exhibit true RNG is not one of them. Otherwise, as I understand it, what you're arguing is comparable to stating that because another happenstance like a cosmic ray flipping a bit hypothetically could happen, no computer process is actually deterministic.

Anon: Your understanding is wrong. Also, as I clearly said, "the small chance a rounding difference could cascade and alter the outcome doesn't really matter". So even given your lack of understanding of computers, your single counterexample is worthless.

https://www.sci.utah.edu/~beiwang/teaching/cs6210-fall-2016/nonassociativity.pdf

Floating point addition is not associative, i.e. (x + y) + z isn't always equal to x + (y + z). There's nothing mysterious about it: floats have finite precision, so adding up numbers means rounding to intermediate forms, so the order matters. This doesn't mean computers aren't deterministic: each order of inputs is itself deterministic. But shuffle the order of addition and you may change the output. This has nothing to do with flipping cosmic bits; e.g. with subtraction instead of addition, order would very obviously affect the final value.

Here's an example with 32-bit floats, you can verify it yourself:

Code:
a = -7048816600        // 0xcfd21234
b = 267081570000000    // 0x5772e8c6
c = -199531000000000   // 0xd73578ee
(a + b) + c == 67543530000000   // 0x5675b8d0
a + (b + c) == 67543524000000   // 0x5675b8cf

And another one:

Code:
a = 4.659371376    // 0x40951992
b = -4.094860554   // 0xc0830919
c = -8.432590485   // 0xc106ebe4
(a + b) + c = -7.868079662   // 0xc0fbc74f
a + (b + c) = -7.868079185   // 0xc0fbc74e

(Counterexamples aren't rare - try searching with random numbers, it never took me more than a couple hundred rolls.)

Why is this relevant to AI? Because the dot product is the most fundamental operation in neural networks: multiply each input by the node's weight, add the bias, apply the activation function, accumulate as the input for the next layer. That accumulation depends on the order the addends arrive, and (unless you're running a neural net on a CPU) the operations are done in parallel, so order isn't guaranteed. (It's possible to guarantee order, but this is slower, and AI cares more about speed. It's not like any particular order is more correct than any other.)

Inco: Want me to generate 1000 flowers? Because I will. I thought it was pretty common knowledge at this point that computers simply cannot be truly random.

Anon: Do a google search for "data race nondeterministic". Or read the article I linked in my last post. Here, I'll link it again, and even quote a relevant part of the intro:

https://www.sci.utah.edu/~beiwang/teaching/cs6210-fall-2016/nonassociativity.pdf

"On massively multi-threaded systems, the non-deterministic nature of how machine floating-point operations are interleaved, combined with the fact that intermediate values have to be rounded or truncated to fit in the available precision, leads to non-deterministic numerical error propagation."

Here's a beginner-friendly article on debugging data races that notes their nondeterministic behaviour: https://www.mathworks.com/products/...how-avoid-during-software-development.html

How many times do I have to reiterate this widely known and self-evident fact of computing before you accept your own notion of "common knowledge" is incorrect?

Inco: Nice moving the goalpost from image generation to LLMs because you got made fun of. And now from computers being nondeterministic but only when using AI to just LLMs being nondeterministic, which is correct* but not at all what you opened with. Assumedly you read a Wikipedia article, glommed onto some of the ideas without fully thinking them through, and jumped at the opportunity to smugly share your findings with us. Now you continue to reply because you have to course-correct your internet argument lest you look like a fool to internet strangers who probably wouldn't even recognize you across the site.

There are also multiple ways to interpret how you should triangulate an ngon. Ultimately the computer must determine how to solve these problems using algorithms that are themselves either deterministic or have variable elements whose outcomes are determined by deeper deterministic elements. That doesn't mean there can't be variance. It just means the variance came from something that is ultimately also fully deterministic. You aren't arguing anything even remotely relevant to my original point anymore.

Anon: Projecting much? Last page you hearted this post:

demicolon said: Are you sure? https://en.wikipedia.org/wiki/IEEE_754 This is like Basic Programming 101. Are you sure you didn't pick the Special Ed option by, so to speak, a rounding difference? Floating point is deterministic. Rounding float to int is deterministic. All random-number generator algorithms are deterministic. Yes, even the "introduce randomness" is deterministic when you use the same seed for the process.

Where again you are wrong: notice how full determinism relies on more than just using the same seed. You clearly didn't understand the difference between floating-point ops being deterministic and parallel floating-point values being accumulated in different orders producing different results. Glad I was able to educate you to the point where you understand you were wrong, even if your ego is too proud to admit it (or to stop replying). If you have a problem with me using examples of nondeterminism from other areas - I understand some people struggle with abstract thought - that's totally fine, I'm happy to work at your level. The above link is specifically about diffusion models, not LLMs or float additions, and clearly explains sources of nondeterminism beyond the RNG seed.

Inco: Alright. I'll concede and add "unlocking the secrets to true random number generation" to the list of anomalous demonic sovl-destroying properties AI exhibits in otherwise normal God-fearing computers.

Anon: Why are you so confused by this? It's not some novel AI-specific feature. I already gave you an extremely mundane example where adding 3 numbers together can vary (when the numbers are sourced concurrently).
Inco is largely correct on the core practical point about AI image generation, while Anon is technically correct about sources of non-determinism in floating-point computations (especially on GPUs) but overstates their relevance to typical "same seed, same parameters" image outputs.
What they're saying
Inco's position:
Image generation with tools like Stable Diffusion is fundamentally deterministic when you fix the seed, prompt, model/checkpoints, LoRAs, sampler (non-ancestral), steps, CFG, resolution, hardware, and software settings.
It's just math: starting noise (controlled by the seed via a pseudo-random generator) → deterministic diffusion/denoising process → same output every time.
This is no different from running any other program, generating a fractal, or using digital art software. No "soul" or magic is involved; it's calculations.
Means of creation (traditional or AI) are tools. Limitations can spark creativity, but the tech itself is wondrous and shouldn't be blamed for misuse (slop, over-trust, etc.). Deeper societal/root causes matter more.
He demonstrates this by generating two identical flower images with the same seed/settings and mocks the idea that AI magically introduces true randomness.
Anon's position:
Computers aren't strictly deterministic in all cases. Floating-point addition is not associative ((x + y) + z ≠ x + (y + z), due to rounding/precision limits).
In neural networks (including diffusion models), core ops like dot products involve many parallel sums on GPUs. The order of additions isn't guaranteed (parallel execution, scheduling, reductions), so tiny differences can arise and (in theory) cascade.
He links a paper on non-associativity in massively multi-threaded systems and examples of data races/nondeterminism.
Even with the same seed, there's a "small chance" of differences, and Inco's flower example doesn't disprove this (it's just one run; try 1000?).
Later, he accuses Inco of misunderstanding, projects, moves goalposts, etc., and notes that full determinism requires more than just a seed (e.g., controlling order).
The conversation devolves into sniping: Inco calls it goalpost-moving and smug Wikipedia-skimming; Anon calls Inco's knowledge limited and ego-driven.
Who is correct?
Inco wins on the practical reality for AI image generation.
In practice, with fixed seeds and standard setups (especially non-ancestral samplers like Euler or DPM++ 2M Karras, no xformers or certain optimizations, consistent hardware/software), the same inputs produce bit-for-bit or visually identical images. This is widely confirmed in Stable Diffusion communities, docs, and tools like Automatic1111 webUI. The initial noise is pseudo-random but fully determined by the seed + RNG algorithm (which is deterministic). The diffusion steps then apply fixed math to that noise.
reddit.com
People routinely reproduce exact images across runs/sessions for iteration, upscaling, or variations. Inco's flower demo aligns with this. Claims of perfect reproducibility hold when you control the variables (and many users do).
Anon is right on the narrow technical caveat, but it's mostly irrelevant here.
Floating-point non-associativity is real (IEEE 754 standard; order of operations affects rounding in parallel reductions). GPUs prioritize speed with out-of-order/parallel execution, atomics, or certain kernels (e.g., some cuDNN convolutions, reductions), which can introduce run-to-run or hardware-dependent variability in deep learning pipelines. This affects training more than inference and shows up in some ML reproducibility issues (e.g., PyTorch/TensorFlow notes on setting deterministic flags, which often slow things down).
sci.utah.edu
However: for inference in diffusion models (what Inco is doing), the effect is usually negligible or absent with proper settings. Many frameworks/libraries make the seeded process reproducible on the same hardware/setup. Differences are more common across different hardware, OS, CUDA versions, batch sizes, ancestral samplers (which inject extra noise), or unoptimized flags (xformers, cuDNN benchmarking).
The "small chance" Anon mentions rarely cascades into visible changes for image gen under fixed conditions. Inco's point about cosmic rays or true RNG is fair—practical determinism holds despite theoretical edge cases.
Anon's examples (simple 3-number sums, data races) are valid computer science but not strong proof that two flower images "should" differ meaningfully. Diffusion models are designed around seeded pseudo-randomness being the primary (and controllable) source of variation.
In short: Inco's "it's deterministic like any program" is the useful, everyday truth for users. Anon's pedantry highlights real CS subtleties in parallel numerical computing, but he inflates them into "AI isn't deterministic" while ignoring that the seed + pipeline makes outputs effectively identical in context. The philosophical bit (soul/magic vs. tools) favors Inco: creation is about human intent and interpretation, not the medium's "essence."
The snark on both sides is classic internet argument: Anon starts reasonable but gets defensive; Inco gets sarcastic and concedes sarcastically at the end.
Broader takeaway (Inco's stronger point): AI image gen is a powerful, neutral tool. Problems like "slop" come from incentives, laziness, or hype, not the math itself. Blaming the tech distracts from how we choose to use (or abuse) it. The floating-point details matter for researchers/engineers chasing bit-exact reproducibility, but not for artists or critics debating "soul."
Edit:
I think we can all agree: you're a bunch of Fuckin' Nerds; never change.
Oh, I posted that same video to this thread at least twice.
I think the derangement kicked in here:
In short, he is trying to break into the voice-acting industry, which is surprisingly varied but extremely competitive. There are only two ways in: either struggle at the 'beginner level', where you essentially get barely-available, barely-paying jobs, or get an expensive agent. Generative AI would take away beginner-level jobs that are already scarce, preventing people, including BlackLightJack, from getting up from the beginner level.
The other thing they all love to trot out is Syndrome from The Incredibles saying "if everyone's super...no one will be," and use that as a metaphor for AI art.
I mean, Syndrome is funny to quote, considering his logic there is basically his bruised ego going on about how he will have his fun being the only super before making the moniker worthless by allowing everyone to become super. In this context, that would be artists saying "Art doesn't matter if all the peasants can do it too!"