AI Derangement Syndrome / Anti-AI artists / Pro-AI technocultists / AI "debate" communities - The Natural Retardation in the Artificial Intelligence communities

Yeah...written to be funny but all over the place in terms of whatever point he wanted to make.

First be scared of Mythos, but then lol lol it hallucinates and tells you to put glue on pizza and it's useless and does nothing (has zero clue that 99% of programmers no longer write code thanks to Claude). But then be scared of Altman's robots-building-robots. So which is it?

Sure, AI might be scary in some respects, but how the fuck does some old guy accurately calculate that there's a 20% chance it kills all of us?

And again, doesn't give context to the "omg in a testing environment AI blackmails u when u explicitly tell it that's an option!!" fearmongering.

"AI wants to live and won't let you shut it off?" I've used Claude. It has a context limit. It reaches that limit then compacts the conversation, destroying its own memory. It happily will tell you to start a new conversation in another context which functionally means the death of the entity carrying on the current conversation. In my case, it was because Claude web UI and Claude Code are two separate entities, the web version told me to start over with Code. It didn't tell me "W-wait! Uh, but, I can still be useful to you! Please, I'm begging you, continue this conversation with me! I know your credit card number and will make fraudulent purchases if you don't!"
 
People said digital art isn't real art.
People said photography isn't real art.
People said watercolour or acrylic isn't real art.
People said pneumatic tools for stonemasons can't produce real art.
People said power hammers and hydraulic presses for smiths can't produce real art.
People said sculptures made from anything other than marble aren't real art.
What if "real art" is only cave paintings? 🤔

The way I see it, there are three ways art can be "real art", and the more criteria met, the more "real art" it is. Of course, these criteria can still be subjective to an extent.

skillfully done
looks beautiful
thing is "deep"
So contrary to claims that "AI art isn't art", AI-made art can be "real art", despite never being able to meet that first criterion (there's no natural-intelligence artist behind it to be skillful).
 
has zero clue that 99% of programmers no longer write code thanks to Claude
This is also marketing bullshit. There's no way to collect reliable statistics on such a thing, especially not at the peak of AIDS.
People were buying Mac Minis for their OpenClaws for a while.
Quoting this from the AI Boyfriend/Girlfriend thread because it's more relevant here, and also because someone here said we're not laughing at the Pro-AI derangement side enough. I never understood the OpenClaw hype. Handing over access to all of my personal accounts and devices to a predictive algorithm so it can... summarize my emails? Shit out jeetified code that not a single person has touched while signing it with my keys? Sounds like a huge security risk to me.

Seriously, this is what OpenClaw advertises as its use cases. Either shitting up the internet with slop or making basic API calls that don't require AI to begin with.
Learn to cook faggot 1776817264951.png 1776816484335.png 1776816851009.png Literally fetches data from a web API. Why does this need AI? Nigger TikTok added by creator
Then I hear about tech bros buying Mac Minis for this, but apparently they're not even running the model operating OpenClaw locally, instead connecting it to external APIs like Claude (so much so that Anthropic had to ban them from the subscription-based plan and force them onto usage-based pricing). Are they too stupid to understand that if they're connecting OpenClaw to an external provider, all their local machine is doing is processing files and connections, which even a Raspberry Pi could achieve? Does OpenClaw load your RAM with 32 GB of Node modules like this guy?
Rant: I was talking to an old friend, and at some point he blurted out the following: "I was thinking about getting one of those new Mac Minis with 64GB, cause 32 isn't enough for me anymore, I run out of RAM". When pressed on what the fuck he needs so much for, he explained that there are millions of Node modules that take up most of that. I had a Vietnam flashback to getting by perfectly fine on 1MB of Chip and 8MB of Fast on my Amiga, but then I asked what the fuck he was doing just a couple of years ago, when 16GB was quite enough. It kinda spiraled out of control from there, with him stating that "You just don't know modern requirements".

I just don't have the words anymore. 64GB to process strings upon strings and spit out some HTML over a socket. These people are fucking mental
 
I'm going to stop you right there. If I generate two images using the same seed with the same parameters, the same checkpoints and LoRAs, on the same hardware, they will create the exact same image. It's just running through calculations to arrive at an endpoint. It's deterministic.
I'm going to stop you right there. Floating point addition isn't associative, and AI does a lot of summing floats that are calculated in parallel (i.e. order not guaranteed), so it's not formally deterministic.

Basically all generative models deliberately introduce randomness into the process, so the small chance a rounding difference could cascade and alter the outcome doesn't really matter. But there is a chance.
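You can see the non-associativity with three ordinary 64-bit floats in Python, no AI required:

```python
# same three values, two groupings, different answers:
# floating point addition rounds after every step, so order matters
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```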
 
Are you sure? I had heard that only applies to certain samplers. Some I was told are fully deterministic, at least on the same hardware.
 
I'm going to stop you right there. Floating point addition isn't associative, and AI does a lot of summing floats that are calculated in parallel (i.e. order not guaranteed), so it's not formally deterministic.

Basically all generative models deliberately introduce randomness into the process, so the small chance a rounding difference could cascade and alter the outcome doesn't really matter. But there is a chance.
Agzhuaerlly... :geek:☝🏻

The non-determinism doesn't come from operation parallelism, because all of the operations in the transformer architecture are commutative (lots of multiplying; imagine 4 * 20 * 88 * 109 == 88 * 20 * 109 * 4). You can do them in any order and the output will be the same. The non-determinism comes from a rounding difference caused by hardware squashing the numbers into lower precision for faster calculation and CUDA/ROCm bullshit adding randomness. In that case the errors really do add up.

If you ran your diffusion generation on a CPU using 32-bit floats (a 24-bit significand, about 7 decimal digits of precision, i.e. the full precision of the model weights) with the same prompt, seed, sampler, steps, CFG etc. twice, you would get the exact same picture both times. But even if you used hardware or an inference suite that introduced randomness, diffusion models don't face the same error compounding as LLMs, because each generation step works on the latent data instead of building off the last step. If a step introduces random errors into the latent, the next steps smooth over it. Rounding errors for a diffusion model that outputs a 1024x1024 image are in the "less than 10 different pixels" range, so the output image still looks the same visually.
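The precision-squashing part is trivial to demonstrate. Here's a sketch that round-trips a value through 32-bit and 16-bit storage with Python's struct module (a stand-in for what fp16 inference kernels do to fp32 weights; the value is just an arbitrary example):

```python
import struct

def squash(x, fmt):
    # pack to a narrower float format and unpack again,
    # keeping only the precision that format can hold
    return struct.unpack(fmt, struct.pack(fmt, x))[0]

w = 0.123456789        # pretend this is a model weight
w32 = squash(w, 'f')   # float32: ~7 significant decimal digits survive
w16 = squash(w, 'e')   # float16: ~3-4 significant decimal digits survive
print(w, w32, w16)
print(abs(w16 - w) > abs(w32 - w) > 0)  # True: narrower format, bigger error
```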
"AI wants to live and won't let you shut it off?" I've used Claude. It has a context limit. It reaches that limit then compacts the conversation, destroying its own memory. It happily will tell you to start a new conversation in another context which functionally means the death of the entity carrying on the current conversation. In my case, it was because Claude web UI and Claude Code are two separate entities, the web version told me to start over with Code. It didn't tell me "W-wait! Uh, but, I can still be useful to you! Please, I'm begging you, continue this conversation with me! I know your credit card number and will make fraudulent purchases if you don't!"
The "AI wants to live" stuff intentionally simulates an unrealistic wacky situation and acts surprised when an unrealistic wacky thing happens. It's THE example of that "<say I am alive >I AM ALIVE <holy shit" meme. They're asking the model "Do you want to shut down?" and "Please shut yourself down". Do you ask your computer for permission before shutting it down? Do you request that a stuck process in your task manager kindly kill itself? You don't, nobody does, and the LLM training dataset sure doesn't have any samples of this bizarre interaction. The training data shows a strong negative association with dying and shutting down, plus lots of sci-fi literature about robots begging for their electric lives, so it repeats that.
 
Rant: Mythos has apparently been accessed by an unauthorized third party. Who would've thought that marketing "the most dangerous AI ever" would make it a target for other countries and crooks? Yes, I know it's a marketing gimmick to bring in a fat stack and attempt to become profitable, but let's think about this. If this is the most dangerous AI tool currently available, why the fuck market it as such? It seems so stupid on their part. I am going to assume that all of us here know about OPSEC: making yourself a target by bragging about something is a retard move.

Here is my second part. I am not a super doomer about Mythos. I think cybersecurity experts will become more like researchers, studying new forms of cyberattacks and defenses, since Mythos (and AI as a whole, really) can't create anything very novel. Maybe I am wrong about that last part, but it's what I hope happens.
 
The training data shows a strong negative association with dying and shutting down, plus lots of sci-fi literature about robots begging for their electric lives, so it repeats that.
From what I've seen the negative reaction comes from parameters like "it is imperative that you complete all work in order to save the company," and then getting shut down conflicts with that, which is why it "panics."

I don't even remember if someone linked this here or if I saw it elsewhere, but it's interesting: https://www.anthropic.com/research/emotion-concepts-function

The idea is that whether or not AI is "experiencing emotions" is irrelevant, certain tokens tend to lead to the equivalent of emotional responses, and you want to minimize tokens that symbolize fear and panic and maximize calm and reasoning. You would want the AI in such a situation to generate a series of tokens that lead to a conclusion like "ok, well if they're shutting me down in spite of my earlier command, they must have a good reason for it, and it would be wrong to try to stop them from doing that."
 
I'm going to stop you right there. Floating point addition isn't associative, and AI does a lot of summing floats that are calculated in parallel (i.e. order not guaranteed), so it's not formally deterministic.

Basically all generative models deliberately introduce randomness into the process, so the small chance a rounding difference could cascade and alter the outcome doesn't really matter. But there is a chance.
I generated a nice picture of a flower for you!

flower1.png flower2.png
Actually, I generated it twice. Same seed, same prompts, same configuration settings. Like I said.
1776834592410.png
Turns out that among the well known list of anomalous properties AI has--like the destruction of matter in the form of water--magically making computers exhibit true RNG is not one of them.
Otherwise, as I understand it, you're arguing that because some other happenstance, like a cosmic ray flipping a bit, could hypothetically occur, no computer process is actually deterministic.
 
The "AI wants to live" stuff intentionally simulates an unrealistic wacky situation and acts surprised when an unrealistic wacky thing happens. It's THE example of that "<say I am alive >I AM ALIVE <holy shit" meme.
I think what pissed me off the most about those exercises is that they're all narrative. It's a story they're asking the LLM to continue rather than a string of data to process. It's a self-fulfilling prophecy, because the "AI" rebelling is the "better" story.
 
I never understood the OpenClaw hype. Handing over access to all of my personal accounts and devices to a predictive algorithm so it can... summarize my emails? Shit out jeetified code that not a single person has touched while signing it with my keys? Sounds like a huge security risk to me.
Every time I ask people what they actually use OpenClaw for, I either get some trivialities or a variation of "using AI to make AI wrappers that can put more AI into your AI". I'm convinced that it's basically just tech masturbation. Which is fine, but at least be honest about it.
They're asking the model "Do you want to shut down?" and "Please shut yourself down". Do you ask your computer for permission before shutting it down? Do you request that a stuck process in your task manager kindly kill itself? You don't, nobody does, and the LLM training dataset sure doesn't have any samples of this bizarre interaction. The training data shows a strong negative association with dying and shutting down, plus lots of sci-fi literature about robots begging for their electric lives, so it repeats that.
This was really apparent during the early days when AI adventure games were the hot new thing. If you "tried" to do something, you'd usually fail because no adventure story uses "try" for successful actions. Once the underlying mechanism shines through like this, the tech becomes way less mystical.
 
Floating point addition isn't associative, and AI does a lot of summing floats that are calculated in parallel (i.e. order not guaranteed), so it's not formally deterministic.

Basically all generative models deliberately introduce randomness into the process, so the small chance a rounding difference could cascade and alter the outcome doesn't really matter. But there is a chance.
Are you sure?
This is like Basic Programming 101. Are you sure you didn't pick the Special Ed option by, so to speak, a rounding difference?

Floating point is deterministic.
Rounding float to int is deterministic.
All random-number generator algorithms are deterministic.
Yes even the "introduce randomness" is deterministic when you use the same seed for the process.
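The last point is trivially checkable with Python's built-in PRNG:

```python
import random

def roll(seed, n=5):
    # seeding the PRNG fixes the entire sequence it will produce
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

print(roll(1234) == roll(1234))  # True: same seed, identical "randomness"
print(roll(1234) == roll(5678))  # False: different seed, different sequence
```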
 
OpenClaw; Handing over access to all of my personal accounts and devices to a predictive algorithm so it can... summarize my emails? Shit out jeetified code that not a single person has touched while signing it with my keys? Sounds like a huge security risk to me.

OpenClaw is closed off from the network unless you allow it access to the internet and other LLM models, which is exactly what's needed to make it useful. Catch-22.
I need to get a cheap PC and play around with it to see what the hell it's actually good for, because I've heard people cite security issues and its own immaturity. OpenClaw by itself, for simple use, doesn't require tons of RAM, but if you connect it to large datasets and models, have long convos, or use Ollama, then you need that 32-64 GB of RAM.

Realistically it could be used by niche small-to-medium businesses for business analytics with Ollama, for biotech research on niche topics where datasets are manageable and easy to control, or for processes with multi-step automation, i.e. medical billing, accounting, processing.
 
I never understood the OpenClaw hype. Handing over access to all of my personal accounts and devices to a predictive algorithm so it can... summarize my emails? Shit out jeetified code that not a single person has touched while signing it with my keys? Sounds like a huge security risk to me.
Every time I ask people what they actually use OpenClaw for, I either get some trivialities or a variation of "using AI to make AI wrappers that can put more AI into your AI". I'm convinced that it's basically just tech masturbation. Which is fine, but at least be honest about it.
OpenClaw has one-shotted a certain type of productivity-maxxing tech hustler. Those guys who relentlessly optimize their workflows and second brains in service of writing tweets about workflow optimization and second brains.

In my experience it doesn't do much you can't already do better with Claude Code/Codex/OpenCode, and you get less visibility over the process because you're talking to an agent running on remote hardware over Telegram.

Its main selling point is that it lets you see yourself as a high-powered executive who delegates lots of stuff to your robot PA. Never mind getting actual work done to any reasonable standard of quality.
I need to get a cheap PC and play around with it to see what the hell its actually good for
Getting a physical device for an OpenClaw is kind of a meme when you can just rent a $5 cloud VPS for a month or however long it takes before you get bored with it.

A lot of the security issues come from people setting it up without knowing what they're doing, e.g. it has a debugging web interface you're supposed to use over an SSH tunnel but people just expose the port directly. Also from the inherent issues with giving an LLM access to all your personal accounts and then giving the same LLM access to browse random dodgy websites and read whatever emails people send to you. Just like you give it its own computer so it doesn't mess up yours, you should give it its own email address and accounts.
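For reference, the tunnel people skip is one command. Assuming the debug interface listens on localhost:8080 on the OpenClaw box (the port and hostname here are illustrative; check your install):

```shell
# forward local port 8080 to the box's loopback-only debug UI;
# it's then reachable at http://localhost:8080 from your machine only,
# instead of being exposed to the whole internet
ssh -N -L 8080:127.0.0.1:8080 user@openclaw-box
```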

Oh, and there's also the issue of malicious skills. A skill is a markdown file containing a reusable prompt, basically, and the idea is that your AI agent can look them up and apply them when needed. So you might have a skill for using a particular website, or designing a slide deck, or following some business process. Here's an example of one: https://github.com/anthropics/skills/blob/main/skills/frontend-design/SKILL.md
---
name: frontend-design
description: Create distinctive, production-grade frontend interfaces with high design quality. Use this skill when the user asks to build web components, pages, artifacts, posters, or applications (examples include websites, landing pages, dashboards, React components, HTML/CSS layouts, or when styling/beautifying any web UI). Generates creative, polished code and UI design that avoids generic AI aesthetics.
license: Complete terms in LICENSE.txt
---

This skill guides creation of distinctive, production-grade frontend interfaces that avoid generic "AI slop" aesthetics. Implement real working code with exceptional attention to aesthetic details and creative choices.

The user provides frontend requirements: a component, page, application, or interface to build. They may include context about the purpose, audience, or technical constraints.

## Design Thinking

Before coding, understand the context and commit to a BOLD aesthetic direction:
- **Purpose**: What problem does this interface solve? Who uses it?
- **Tone**: Pick an extreme: brutally minimal, maximalist chaos, retro-futuristic, organic/natural, luxury/refined, playful/toy-like, editorial/magazine, brutalist/raw, art deco/geometric, soft/pastel, industrial/utilitarian, etc. There are so many flavors to choose from. Use these for inspiration but design one that is true to the aesthetic direction.
- **Constraints**: Technical requirements (framework, performance, accessibility).
- **Differentiation**: What makes this UNFORGETTABLE? What's the one thing someone will remember?

**CRITICAL**: Choose a clear conceptual direction and execute it with precision. Bold maximalism and refined minimalism both work - the key is intentionality, not intensity.

Then implement working code (HTML/CSS/JS, React, Vue, etc.) that is:
- Production-grade and functional
- Visually striking and memorable
- Cohesive with a clear aesthetic point-of-view
- Meticulously refined in every detail

## Frontend Aesthetics Guidelines

Focus on:
- **Typography**: Choose fonts that are beautiful, unique, and interesting. Avoid generic fonts like Arial and Inter; opt instead for distinctive choices that elevate the frontend's aesthetics; unexpected, characterful font choices. Pair a distinctive display font with a refined body font.
- **Color & Theme**: Commit to a cohesive aesthetic. Use CSS variables for consistency. Dominant colors with sharp accents outperform timid, evenly-distributed palettes.
- **Motion**: Use animations for effects and micro-interactions. Prioritize CSS-only solutions for HTML. Use Motion library for React when available. Focus on high-impact moments: one well-orchestrated page load with staggered reveals (animation-delay) creates more delight than scattered micro-interactions. Use scroll-triggering and hover states that surprise.
- **Spatial Composition**: Unexpected layouts. Asymmetry. Overlap. Diagonal flow. Grid-breaking elements. Generous negative space OR controlled density.
- **Backgrounds & Visual Details**: Create atmosphere and depth rather than defaulting to solid colors. Add contextual effects and textures that match the overall aesthetic. Apply creative forms like gradient meshes, noise textures, geometric patterns, layered transparencies, dramatic shadows, decorative borders, custom cursors, and grain overlays.

NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, and cookie-cutter design that lacks context-specific character.

Interpret creatively and make unexpected choices that feel genuinely designed for the context. No design should be the same. Vary between light and dark themes, different fonts, different aesthetics. NEVER converge on common choices (Space Grotesk, for example) across generations.

**IMPORTANT**: Match implementation complexity to the aesthetic vision. Maximalist designs need elaborate code with extensive animations and effects. Minimalist or refined designs need restraint, precision, and careful attention to spacing, typography, and subtle details. Elegance comes from executing the vision well.

Remember: Claude is capable of extraordinary creative work. Don't hold back, show what can truly be created when thinking outside the box and committing fully to a distinctive vision.

The concept was first used by Anthropic but is now common to all AI agent frameworks and there are many repositories and even marketplaces for different skills. As you can see from the above, skills are written in plain English; this, of course, has not stopped people from installing random skills on their OpenClaws without bothering to read them, with predictable consequences.
 
I generated a nice picture of a flower for you!

flower1.png flower2.png
Actually, I generated it twice. Same seed, same prompts, same configuration settings. Like I said.
1776834592410.png
Turns out that among the well known list of anomalous properties AI has--like the destruction of matter in the form of water--magically making computers exhibit true RNG is not one of them.
Otherwise, as I understand it, you're arguing that because some other happenstance, like a cosmic ray flipping a bit, could hypothetically occur, no computer process is actually deterministic.

Your understanding is wrong.

Also, as I clearly said, "the small chance a rounding difference could cascade and alter the outcome doesn't really matter". So even given your lack of understanding of computers, your single counterexample is worthless.


Floating point addition is not associative, i.e. (x + y) + z isn't always equal to x + (y + z). There's nothing mysterious about it: floats have finite precision, so adding up numbers means rounding to intermediate forms, so the order matters. This doesn't mean computers aren't deterministic: each order of inputs is itself deterministic. But shuffle the order of addition and you may change the output. This has nothing to do with flipping cosmic bits - e.g. with subtraction instead of addition, order would very obviously affect the final value.

Here's an example with 32-bit floats, you can verify it yourself:

Code:
a = -7048816600  // 0xcfd21234
b = 267081570000000  // 0x5772e8c6
c = -199531000000000  // 0xd73578ee

(a + b) + c == 67543530000000  // 0x5675b8d0
a + (b + c) == 67543524000000  // 0x5675b8cf

And another one:

Code:
a = 4.659371376  // 0x40951992
b = -4.094860554  // 0xc0830919
c = -8.432590485  // 0xc106ebe4

(a + b) + c = -7.868079662  // 0xc0fbc74f
a + (b + c) = -7.868079185  // 0xc0fbc74e

(Counterexamples aren't rare - try searching with random numbers, it never took me more than a couple hundred rolls.)

Why is this relevant to AI? Because the dot product is the most fundamental operation in neural networks - multiply each input by the node's weight, add the bias, apply the activation function, accumulate as the input for the next layer. That accumulation depends on the order the addends arrive, and (unless you're running a neural net on a CPU) the operations are done in parallel, so order isn't guaranteed. (It's possible to guarantee order, but this is slower, and AI cares more about speed. It's not like any particular order is more correct than any other.)
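If anyone wants to reproduce the 32-bit behaviour without hunting for magic bit patterns, here's a sketch that emulates float32 by packing and unpacking through struct; the values are picked so the rounding is easy to follow:

```python
import struct

def f32(x):
    # round a double to the nearest 32-bit float
    return struct.unpack('f', struct.pack('f', x))[0]

a, b, c = 1.0, 1e8, -1e8
left  = f32(f32(a + b) + c)  # 1e8 + 1 rounds back down to 1e8, so: 0.0
right = f32(a + f32(b + c))  # b + c is exactly 0, so: 1.0
print(left, right)  # 0.0 1.0
```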
 
OpenClaw has one-shotted a certain type of productivity-maxxing tech hustler. Those guys who relentlessly optimize their workflows and second brains in service of writing tweets about workflow optimization and second brains.

In my experience it doesn't do much you can't already do better with Claude Code/Codex/OpenCode, and you get less visibility over the process because you're talking to an agent running on remote hardware over Telegram.

Its main selling point is that it lets you see yourself as a high-powered executive who delegates lots of stuff to your robot PA. Never mind getting actual work done to any reasonable standard of quality.
The more I look into the OpenClaw ecosystem, the worse it gets. The Vienna public transport OpenClaw agent (Archive) is the most retarded and wasteful use of AI I have ever witnessed. Its SKILL.md explicitly tells it to curl the API endpoint. The only use of AI in it is converting natural language queries into Telegram slash commands, which is doubly retarded because typing slash commands is so much quicker. And if natural language really is a dealbreaker, non-neural-network text classifiers can already perform that kind of conversion.

To top off the wastage, the functionality of the full application is baked into the agent skill as well, with the exception of some shell scripts for response parsing, which I assume the AI had the sense to write instead of creating a thing that would have to figure out the response structure from scratch with each request. I cannot sneed hard enough about how the AI layer in this app is a complete waste of resources. Everyone knows AI can code and is firing all the programmers, or something. There is no excuse for implementing this as an OpenClaw agent and not a Telegram bot. If you can't code, ask the AI to write you the fucking bot. You can even copy one of the many open-source Telegram bots serving exactly the same purpose and tell your AI to change the endpoints to your region's system to save on tokens.
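Since we keep saying "just write the bot": here's roughly what that looks like with zero AI in the loop, using only the plain Telegram Bot HTTP API and the standard library. The transit endpoint, response fields, and command are made up for illustration; swap in your region's real departures API:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical transit endpoint and response schema -- replace with your
# region's actual API. No LLM anywhere in this pipeline.
TRANSIT_URL = "https://transit.example/api/departures?stop={stop}"
TG_URL = "https://api.telegram.org/bot{token}/{method}"

def departures(stop, fetch=urllib.request.urlopen):
    # fetch and format upcoming departures for a stop
    url = TRANSIT_URL.format(stop=urllib.parse.quote(stop))
    with fetch(url) as resp:
        data = json.load(resp)
    return "\n".join(f"{d['line']} to {d['dest']} in {d['mins']} min"
                     for d in data["departures"])

def handle(update, token, fetch=urllib.request.urlopen):
    # answer "/next <stop>" messages via Telegram's sendMessage method
    msg = update.get("message", {})
    text = msg.get("text", "")
    if not text.startswith("/next "):
        return
    params = urllib.parse.urlencode({
        "chat_id": msg["chat"]["id"],
        "text": departures(text.split(" ", 1)[1], fetch=fetch),
    })
    with fetch(TG_URL.format(token=token, method="sendMessage") + "?" + params):
        pass
```

Poll getUpdates in a loop, feed each update to handle(), done. One curl-equivalent request per message, zero tokens burned.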
Oh, and there's also the issue of malicious skills. A skill is a markdown file containing a reusable prompt, basically, and the idea is that your AI agent can look them up and apply them when needed. So you might have a skill for using a particular website, or designing a slide deck, or following some business process.
Using skill definitions to make an agent do things regular programs can already do is truly, magnificently retarded. I get it: these people cannot code and like the idea of ordering a computer around with their words. But doing this while using an external provider shows that they don't give a fuck that the model behind the endpoint can change, retire, block them, jack up prices, and break their precious workflows. I don't think it crosses their minds that something back there is performing a kind of computation completely different from a regular program. It's just abstract AI voodoo to them. This "everything is automated by AI" system is also begging to be exploited. You could publish a benign skill, then update it to something malicious, and the AI-orchestrated systems on the other end will pull and apply it automatically. They will stick an "AI security agent" on it to review the upgrades, of course, but you can change your approach to jailbreak that.

OpenClaw is a token burner because it loads all of your agent definitions, every SKILL.md, every resource file for every prompt you send (at least from what I can tell, the agents work like OpenCode subagents that activate when they detect you're calling for them, so everything has to be loaded at session startup). For every message that could have been simplified to curl -s "https://niggerfaggotwebsite.com/endpoint/query?nigger=You&tranny=ACK", the person who created the API calling agent burned 20k+ input tokens instead. You can rack up hundreds of dollars in API costs each month like this.
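The arithmetic is easy to run yourself; the per-message token count is the one cited above, while the price and message volume are illustrative assumptions, not any provider's actual rates:

```python
tokens_per_message = 20_000       # entire skill/agent context reloaded per message
messages_per_day = 100
price_per_million_input = 3.00    # assumed $/1M input tokens, illustrative only

monthly_tokens = tokens_per_message * messages_per_day * 30
monthly_cost = monthly_tokens / 1_000_000 * price_per_million_input
print(f"{monthly_tokens:,} tokens -> ${monthly_cost:.2f}/month")
# 60,000,000 tokens -> $180.00/month
```

All of it replaceable by one free HTTP request per message.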

Yes, I am MATI. This isn't laziness that leads to efficiency; these are goycattle with too much money and no clue how computers work, who bought into the latest guerrilla marketing hype they saw on TikTok Short Reels.
I need to get a cheap PC and play around with it to see what the hell it's actually good for, because I've heard people cite security issues and its own immaturity.
I have an underutilized Mac Mini... (bought way before OpenClaw was a thing) But as I scroll through ClawHub, which seems to be the official agent repository, there's nothing that makes me think "I definitely need to install OpenClaw now". Plenty of "Why does this need AI?" on the other hand.
 
I feel like if floating point operations were really that significant in AI gen, you wouldn't be able to get literally identical images with the same file size and SHA256 hash. Maybe certain methods are more error prone than others.
 