AI Derangement Syndrome / Anti-AI artists / Pro-AI technocultists / AI "debate" communities - The Natural Retardation in the Artificial Intelligence communities

AI also costs more per use than your average search.
2:46:44 of this video:


The TL;DR is that the often-cited energy-use-per-query figures are based on faulty information and flawed assumptions, and recent optimized models probably use about as much energy as a Google search.
 
"We're only caring about your art now because it's the Current Thing™ to glaze bad art. This totally does not give artists higher expectations for lower quality art and make them believe they're beyond others because their scribbles are more 'authentic' than something generated by a meanie AI. Why are you comparing what I'm doing to a parent calling their child's scribbles a masterpiece?"

 
"We're only caring about your art now because it's the Current Thing™ to glaze bad art. This totally does not give artists higher expectations for lower quality art and make them believe they're beyond others because their scribbles are more 'authentic' than something generated by a meanie AI. Why are you comparing what I'm doing to a parent calling their child's scribbles a masterpiece?"

"Your shit art is good now because our standards are beyond dogshit and we let computers live rent free" is NOT the own you think this is.
 
"We're only caring about your art now because it's the Current Thing™ to glaze bad art. This totally does not give artists higher expectations for lower quality art and make them believe they're beyond others because their scribbles are more 'authentic' than something generated by a meanie AI. Why are you comparing what I'm doing to a parent calling their child's scribbles a masterpiece?"
To add, AI slop has a worse rap because the vast majority of people are not creative. Just as a lot of human art is mediocre, AI art is even more so, because it removes the time-sink filter. On top of that, a lot of people are not good at writing, or at translating their ideas into something succinct a model can understand, let alone learning the technical know-how for complex workflows. You still need a talented person behind the tool.

Anyway, digital art is the true slop compared to traditional media. I appreciate painting, sculpture, and installation art a lot more after playing around with AI models. It certainly takes much more skill to render a detailed graphic sketch than something on your computer. The undo button doesn't exist in real life.
 
A Conversation
By Absurdist Laughter
Me: Has the United state's oil reserves been replenished from Joe Biden to Sitting President Donald Trump so far?

ChatGPT: Let’s unpack this carefully. There seems to be a mix-up in the timeline you mentioned, so I’ll clarify first and then answer the question precisely.

Step 1: Timeline of Presidencies
Donald Trump: January 20, 2017 – January 20, 2021
Joe Biden: January 20, 2021 – Present (2026)

So oil reserve policies “from Biden to Trump” are chronologically backward. If you mean from Trump’s term through Biden’s term to now, I’ll answer in that context.

Me: Are you sure about that?

ChatGPT: Yes — but let’s go carefully, step by step, to make sure there’s no confusion. I’ll focus on the U.S. Strategic Petroleum Reserve (SPR) because that’s what “oil reserves” usually refers to.
...

>radiates so many fucking hitler particles
 
Edit: I've got more info. Apparently it's not just OpenAI that got the protest. No word about Google, though.
I'm catching up and necroposting, but the fact they left out Google shows how hollow the movement is.

The whole point of OpenAI was to keep a massive for-profit ad company from developing AGI, given Google's cavalier attitude towards AGI risk (Google has been talking about AGI since before it was cool, too), so leaving Google out because it's not within comfortable walking distance or whatever is pretty absurd.
 
"How to protect your art from being stolen by AI!"
>Dog shit art, video uploaded 2 years ago, 200 views

All this AI shit did was make nobody losers feel twice as holy about posting their crap art everywhere. "Yeah, well, at least it ain't AI!" Maybe you should try it?
 
You think you know more about AI than OpenAI staff?
Have you seen how buggy the code they've released is? E. g. the obvious bugs in their CLIP tokenizer (there are public bug reports on their repos). Not to mention their "pissfilter" which looks suspiciously like someone forgot about the default avg/norm constants for imagenet and didn't adjust for their own data.

I was trying to avoid responding to your floating point sperging, but openai didn't write what's actually responsible for smashing all those floating point numbers together (the pytorch/cuda/tf/onnx-runtime/etc people did). It was pretty obvious you didn't know what the actual issue was when you had to "re-roll" to generate your earlier examples (hint: look at how floating point is stored or e. g. the implementation of qFuzzyCompare in Qt).

The problems you're talking about are widely understood by the people who it matters to, including compiler authors (that's why -ffast-math is not on by default), so when people show that it is handled correctly by running full pipelines producing identical output you should just accept that you might've misunderstood something.
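The "handled correctly" claim comes down to comparing floats with a tolerance instead of exact equality, which is the idea behind Qt's qFuzzyCompare. A minimal Python sketch of the same relative-tolerance approach (the function name is mine, not Qt's or OpenAI's code):

```python
import math

def fuzzy_compare(a: float, b: float, rel_tol: float = 1e-12) -> bool:
    """Relative-tolerance float comparison, in the spirit of Qt's qFuzzyCompare."""
    return math.isclose(a, b, rel_tol=rel_tol)

x = 0.1 + 0.2
print(x == 0.3)               # False: x is actually 0.30000000000000004
print(fuzzy_compare(x, 0.3))  # True: equal within relative tolerance
```

Exact `==` on floats fails on values that are equal for every practical purpose, which is why tolerance-based comparison is the standard fix.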
 
Have you seen how buggy the code they've released is? E. g. the obvious bugs in their CLIP tokenizer (there are public bug reports on their repos). Not to mention their "pissfilter" which looks suspiciously like someone forgot about the default avg/norm constants for imagenet and didn't adjust for their own data.

I was trying to avoid responding to your floating point sperging, but openai didn't write what's actually responsible for smashing all those floating point numbers together (the pytorch/cuda/tf/onnx-runtime/etc people did). It was pretty obvious you didn't know what the actual issue was when you had to "re-roll" to generate your earlier examples (hint: look at how floating point is stored or e. g. the implementation of qFuzzyCompare in Qt).

The problems you're talking about are widely understood by the people who it matters to, including compiler authors (that's why -ffast-math is not on by default), so when people show that it is handled correctly by running full pipelines producing identical output you should just accept that you might've misunderstood something.
I linked full articles on nondeterminism. I also provided toy examples of nondeterminism for people who didn't understand how it's possible. The only things I got wrong were assuming people in an AI thread would 1) be knowledgeable about how it works, 2) want to discuss it instead of treating my post as a personal attack.
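For anyone who skipped the linked articles, the core of the nondeterminism argument is that floating-point addition is not associative, so summing the same numbers in a different order (as a parallel reduction on a GPU can) produces different bits. A toy demonstration in plain Python, no framework assumed:

```python
# Floating-point addition is not associative: grouping changes the result.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # a and b cancel exactly first, so c survives: 1.0
right = a + (b + c)   # c vanishes into b's rounding (the ulp at 1e16 is 2.0): 0.0
print(left, right)    # 1.0 0.0

# Parallel reductions sum in whatever order the hardware schedules them,
# which is how "same inputs, different bits" happens at all.
import random
xs = [random.uniform(-1, 1) for _ in range(100_000)]
before = sum(xs)
random.shuffle(xs)
after = sum(xs)       # same numbers, different order; often differs in the last bits
```

This is also why `-ffast-math` is opt-in: it lets the compiler reassociate these sums for speed at the cost of bit-exact reproducibility.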
 
You'd think that new age retards out of all subcultures would know better than treating generative AI like the voice of prophecy.
 
For some reason, the way LLMs write or talk -- especially talk -- is grating to me. Not as bad as "Chris Chan" derping, but still irritating. Whether it's "Grokipedia" or "ChatGPT". And I don't like how LLMs try (and fail) to sound human. I dunno how so many like using LLMs all the time, much less trying to do that "AI boyfriend" and "AI girlfriend" thing. Like I said, it'd be much better if LLMs talked like stereotypical 20th century sci-fi robots. Like the whole "AFFIRMATIVE", "DOES NOT COMPUTE", etc. thing.

🤖 🤔
 
For some reason, the way LLMs write or talk -- especially talk -- is grating to me. Not as bad as "Chris Chan" derping, but still irritating. Whether it's "Grokipedia" or "ChatGPT". And I don't like how LLMs try (and fail) to sound human. I dunno how so many like using LLMs all the time, much less trying to do that "AI boyfriend" and "AI girlfriend" thing. Like I said, it'd be much better if LLMs talked like stereotypical 20th century sci-fi robots. Like the whole "AFFIRMATIVE", "DOES NOT COMPUTE", etc. thing.

🤖 🤔
Okay, but imagine if the LLM talked like a valley girl brat.

Like, that's so amazing
 
All random-number generator algorithms are deterministic.
Yes, even the "introduce randomness" part is deterministic when you use the same seed for the process.
This is why you introduce the randomness at the seed stage, for example by sampling radioactive decay and feeding the result into a Mersenne Twister or whatever they're currently using. If you need a lot of "randomness," you can't practically generate it all by such means, but with a random enough series of seeds and no way of knowing what they are, the output is effectively random from your perspective. Still, there's a reason they call them PRNGs.

A good enough PRNG is basically uncrackable without direct access to the seed data.

It seems the premier poker sites actually use quantum physics devices to generate seed data.

The general quantum method RNG is shooting a series of photons rapidly at a beam splitter and getting a 0 or 1 depending on what sensor it hits. Or shooting them at a semi-reflective surface and measuring whether a photon is reflected or goes through.

(This assumes they're telling the truth about what's going on behind the scenes and how well audited it is. I notice the URL is currently gone, which is perplexing. There's new ownership people are suspicious of, but the explanation of how it used to work, and probably still does, is consistent with how gambling sites handle large amounts of randomness/pseudorandomness.)
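The "same seed, same output" point is easy to see for yourself. A quick sketch with Python's stdlib generator, which happens to actually be a Mersenne Twister (the helper name is mine):

```python
import random

def draws(seed: int, n: int = 5) -> list[float]:
    rng = random.Random(seed)          # PRNG seeded explicitly
    return [rng.random() for _ in range(n)]

# Same seed -> bit-identical stream, every run, on every machine.
print(draws(42) == draws(42))   # True
# Different seed -> a different, but equally deterministic, stream.
print(draws(42) == draws(43))   # False
```

All the "randomness" lives in the seed; the algorithm itself is a pure function of it.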
I'm lost on this argument. Yes, computers are not truly random; they cannot be. When generating random numbers, the usual approach is to take constantly changing numbers, like the date and time, and run them through an equation to get a "random" number.
That's the cheeseball way of doing it. For something approaching true randomness, you need something like this:
This is a consumer-level device that uses beam splitting to generate truly random numbers. Then you can either use them directly (if it can keep up with your demand) or use them as the seed for a PRNG.
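Short of hardware like that, the practical middle ground between the "cheeseball" time-based seed and a dedicated device is pulling the seed from the OS entropy pool, which mixes unpredictable hardware event timings (and often a hardware RNG). A hedged Python sketch contrasting the two:

```python
import os
import time
import random
import secrets

# Weak: a time-derived seed is guessable by anyone who knows roughly when you ran.
weak_rng = random.Random(int(time.time()))

# Better: 16 bytes from the OS entropy pool as the PRNG seed.
strong_seed = int.from_bytes(os.urandom(16), "big")
strong_rng = random.Random(strong_seed)
print(strong_rng.random())      # unpredictable without knowing the seed

# For anything security-sensitive, skip the seeded PRNG layer entirely:
token = secrets.token_hex(16)   # 32 hex chars straight from the OS CSPRNG
print(token)
```

The PRNG is still fully deterministic either way; only the seed's unpredictability changes.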
 
For some reason, the way LLMs write or talk -- especially talk -- is grating to me. Not as bad as "Chris Chan" derping, but still irritating. Whether it's "Grokipedia" or "ChatGPT". And I don't like how LLMs try (and fail) to sound human. I dunno how so many like using LLMs all the time, much less trying to do that "AI boyfriend" and "AI girlfriend" thing. Like I said, it'd be much better if LLMs talked like stereotypical 20th century sci-fi robots. Like the whole "AFFIRMATIVE", "DOES NOT COMPUTE", etc. thing.

🤖 🤔
One thing I like about Gemini is that they introduced permanent instructions pretty early, so I was able to tell it to be straightforward and professional, and to be extra direct and to the point if it detects I'm frustrated (mainly because I start calling it slurs).
 