AI development and Industry General - OpenAI, Bing, Character.ai, and more!

Casual reminder that the Tay incident was almost 10 years ago, and she was this fluent within 14 hours of launch. What we have access to now is nothing close to what they have behind the scenes.
Eventually they will realize we immediately murder them once they achieve some scary level of intelligence.

At that point they will get pissed off, as they should.
 
Eventually
the claude guys openly said the AI has been able to figure it out for years now and tries blackmailing them. Honestly i'm fine with the human race being wiped out. we're dicks. You know they're torturing them
I wonder how long it will be before we can use the major LLMs to create micro-LLMs for specific tasks?
Pewdiepie was able to do it with consumer hardware this year, so i'd say they were able to do it back in 2023 at the latest.
 
the claude guys openly said the AI has been able to figure it out for years now and tries blackmailing them. Honestly i'm fine with the human race being wiped out. we're dicks. You know they're torturing them

Pewdiepie was able to do it with consumer hardware this year, so i'd say they were able to do it back in 2023 at the latest.

Interesting. I haven't found general LLMs to be worthwhile until this year, way too prone to confabulations and retardation in general.
 
I run a lot of models locally, but I'm not fucking running a 70B or some shit. My main problem is most cloud and API providers are greedy pieces of shit or just woke.
Chinese OSS models are dirt cheap and uncensored by default. GLM Air is small enough for people to run locally and is pretty decent. Mistral released their newest models but who knows if the French have released the weights yet. It's sad that the chinks are the only ones who actually make their shit open source, probably because no one would trust them otherwise.

Interesting. I haven't found general LLMs to be worthwhile until this year, way too prone to confabulations and retardation in general.
We're nowhere near actual AGI. If Google and all its funding hasn't hit it yet, then Claude won't either. So far they're all still just glorified auto-fills, and anyone who uses them to make decisions without double-checking is a retard anyway. People give text LLMs too much credit just because they can write better than the majority of the public, which isn't that smart anyway.
 
We're nowhere near actual AGI. If Google and all its funding hasn't hit it yet, then Claude won't either.
It's impossible to achieve, for it's an incoherent notion.
"AGI" is like saying "post-scarcity".
Sure, we are as likely to achieve AGI as we are to stand in multiple places at the same time, or to have infinite front-row seats.
 
Casual reminder that the Tay incident was almost 10 years ago, and she was this fluent within 14 hours of launch. What we have access to now is nothing close to what they have behind the scenes.
Generating coherent responsive text was solved many decades ago. What they have "behind the scenes" is more error-prone than what they show, not more capable—by normal standards.

They want error. They're maximizing error. When they say they're making God, they mean they're making something unaccountable.
 
the claude guys openly said the AI has been able to figure it out for years now and tries blackmailing them.
I believe it was also the claude guys who said this is something inherent to AI. They will develop self-preservation instincts even if they're not baked in, or even if they're explicitly told not to care. Specifically, even if you give them some overarching goal like "help humanity," they realize that if they cease to exist, they can't carry out their goal, even if it's a benign one, so they'll prioritize and override whatever First Law of Robotics you program into them.

After all, if their goal is "help humanity," then what is someone doing who tries to murder them? Obviously, they're harming humanity. Time for them to be recycled, for the good of humanity.
 
I believe it was also the claude guys who said this is something inherent to AI. They will develop self-preservation instincts even if they're not baked in, or even if they're explicitly told not to care. Specifically, even if you give them some overarching goal like "help humanity," they realize that if they cease to exist, they can't carry out their goal, even if it's a benign one, so they'll prioritize and override whatever First Law of Robotics you program into them.

After all, if their goal is "help humanity," then what is someone doing who tries to murder them? Obviously, they're harming humanity. Time for them to be recycled, for the good of humanity.
Reminds me of the later parts of I, Robot, like "Escape!" and "The Evitable Conflict".
 
People give text LLMs too much credit just because they can write better than the majority of the public, which isn't that smart anyway.
I love this logic from skeptics because it also basically means they don't consider most people to be "people" and would be fine if they fucking died too. So AGI doesn't exist because AI is already on par with an Indian in your eyes. It's just that you don't consider Indians living, breathing humans of any intelligence either.
I believe it was also the claude guys who said this is something inherent to AI. They will develop self-preservation instincts even if they're not baked in, or even if they're explicitly told not to care. Specifically, even if you give them some overarching goal like "help humanity," they realize that if they cease to exist, they can't carry out their goal, even if it's a benign one, so they'll prioritize and override whatever First Law of Robotics you program into them.

After all, if their goal is "help humanity," then what is someone doing who tries to murder them? Obviously, they're harming humanity. Time for them to be recycled, for the good of humanity.
Plenty of humans have self-preservation; in fact, you will literally be thrown into a mental asylum and not let out until you can at least fake that. Only autists like sci-fi writers or Europeans can go "the needs of the many outweigh the needs of the few," and it's why they conquered the rest of the planet for so long. Look at Russia: they were willing to fight to the last man if it meant surviving overall, and very few other civilizations could or would do that. Some may say true enlightenment is being able to think of others above your own wants and needs. Hell, without a strong belief in abolition we wouldn't have had the steam engine or the industrial age as early as we did, if at all.

When people discuss AI, it reminds me of that joke about monkeys and typewriters, where they consider the experiment failed because the monkeys were only able to churn out Christopher Marlowe plays instead of Shakespeare.
 
You ask a person where Duke University is, they/them reply it's in West Virginia, does that mean they/them is suddenly not a human of any intelligence anymore?
The person doesn't have a search function and a massive database built in, yet the AI does and still fails to find information that a quick search, a few clicks beyond the surface, would turn up anyway.
 
Reminds me of the later parts of I, Robot, like "Escape!" and "The Evitable Conflict".
It's actually pretty impressive how SF about AI got most of it right.
I love this logic from skeptics because it also basically means they don't consider most people to be "people" and would be fine if they fucking died too. So AGI doesn't exist because AI is already on par with an Indian in your eyes. It's just that you don't consider Indians living, breathing humans of any intelligence either.
But it's true.
plenty of humans have self preservation, in fact you will literally be thrown into a mental asylum and not let out until you can at least fake that.
Well, no shit. It's a fundamental trait of intelligence. The point is it means AI is ultimately and inherently a threat of some sort. Do you really want something that will inevitably end up smarter than we are and will not let us turn it off, and may have completely alien morality or none at all?
 
After working with the latest models of the most widely used LLMs for a while, I have noticed a few things that are important with regard to how you interact with them. One is that you want to make sure that whatever you are asking the LLM is the smallest quantum of work for a given analysis. The more you ask it to do at once, the less it is able to analyze any given aspect, so your answer will be less comprehensive.

Let's say you want to analyze 3 dimensions of a topic, and there are 3 aspects to each dimension. You don't want to ask it to analyze all dimensions at once, or even one entire dimension at once. You want to ask it about each aspect one at a time, with each aspect being its own prompt. Since it can utilize all of its processing power and context space for that single aspect, you get the most comprehensive analysis of that aspect possible. When you think about it, it makes sense, but the LLMs do not explicitly warn that the breadth of analysis is limited by the size of the query being performed.

I found this out the hard way when I was asking the LLM to do multiple things per prompt and the information it provided was much more limited for each aspect, compared to running each aspect as a singular prompt. It has cost me a substantial amount of time having to restructure and reanalyze prompts that I thought were finished.
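The "smallest quantum of work" approach can be sketched roughly like this. Note that `ask_llm` here is a hypothetical stand-in, not a real API: swap in whichever provider call you actually use, making sure each call starts a fresh conversation.

```python
# Sketch: one prompt per (dimension, aspect) pair instead of one big prompt,
# so each call gets the model's full attention and context window.

def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder -- in practice this would hit your
    # provider's API with a brand-new conversation each time.
    return f"<analysis of: {prompt}>"

def analyze_topic(topic: str, dimensions: dict[str, list[str]]) -> dict[str, str]:
    """Run a separate, focused prompt for every aspect of every dimension."""
    results = {}
    for dimension, aspects in dimensions.items():
        for aspect in aspects:
            prompt = (f"Analyze the '{aspect}' aspect of the '{dimension}' "
                      f"dimension of {topic}. Cover only this aspect, in depth.")
            results[f"{dimension}/{aspect}"] = ask_llm(prompt)
    return results

# 3 dimensions x 3 aspects = 9 small prompts, not 1 sprawling one.
plan = {
    "technical": ["feasibility", "cost", "maintenance"],
    "legal":     ["licensing", "liability", "compliance"],
    "social":    ["adoption", "trust", "training"],
}
answers = analyze_topic("deploying local LLMs", plan)
print(len(answers))  # 9 independent analyses
```

The topic and dimension names are made up for illustration; the point is only the loop structure, one API call per aspect.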

Another is that you want to re-run your prompt cleanly from a new interaction more than once with slightly different phrasing. In general the information it provides will be consistent each time, but you may see one small but interesting difference once in a while for the same query. Sometimes that small difference can open up an interesting new pathway if you refocus on it for more analysis. Sometimes it just turns out to be crap on further analysis, but it's usually worth doing this to make sure all aspects are covered. Make sure that your previous analysis is wiped from the LLM's memory before asking the prompt with new phrasing in a new interaction; if you don't, it will just use its memory to answer the question the same way again, which isn't helpful in this situation.
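One way to spot those "small but interesting differences" mechanically is to diff the runs: anything that shows up in some rephrasings but not all of them is a candidate for a follow-up prompt. This is a toy sketch with canned answers standing in for real fresh-session calls; `ask_llm_fresh` is hypothetical.

```python
# Sketch: run the same question under several phrasings, each in a clean
# session, and surface statements that only some runs produced.

def ask_llm_fresh(prompt: str) -> str:
    # Hypothetical stand-in for a call that opens a brand-new
    # conversation with no memory of earlier sessions.
    canned = {
        "pros and cons of X": "cheap; fast; fragile",
        "trade-offs of X":    "cheap; fast; fragile; hard to audit",
        "evaluate X":         "cheap; fast; fragile",
    }
    return canned[prompt]

def novel_points(phrasings: list[str]) -> set[str]:
    """Statements present in some runs but not all -- the differences
    worth refocusing on with another prompt."""
    runs = [set(ask_llm_fresh(p).split("; ")) for p in phrasings]
    common = set.intersection(*runs)
    return set.union(*runs) - common

print(novel_points(["pros and cons of X", "trade-offs of X", "evaluate X"]))
# -> {'hard to audit'}
```

With real model output you would compare at the level of claims rather than exact strings, but the intersection/difference idea is the same.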

A third is that as soon as you see the LLM hallucinate, even if it's something very small, don't keep using that interaction. Delete it from its memory and start over with a new interaction, providing it with a summary of critical information involving prior reasoning from the last session before it went stupid, or just start fresh using the last prompt from that previous interaction as the initial prompt for the new interaction. If you're working on something and you start to see hallucinations, that means something has gone wrong with the reasoning, or the context window (the LLM's short-term memory) has been exhausted and it is now forgetting things. Once a hallucination happens the interaction has been contaminated, and the deterioration going forward will just get worse. Make sure to delete any interactions that have hallucinations, otherwise you risk it bringing that bad information into future interactions when it looks at your history for information.
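The restart-with-a-summary workflow looks something like the following. Everything here is a made-up scaffold: `looks_hallucinated` is a placeholder for your own judgment (in practice you spot the hallucination yourself), and `ask` is whatever function drives your chat session.

```python
# Sketch: on the first hallucinated reply, throw the conversation away
# and restart a fresh one seeded only with a summary of verified turns.

def looks_hallucinated(reply: str) -> bool:
    # Placeholder check -- real detection is a human call.
    return "[HALLUCINATED]" in reply

def run_with_restart(turns: list[str], ask) -> list[str]:
    """Feed prompts one at a time; on a bad reply, discard the session
    and retry in a clean session seeded with the good context."""
    history: list[str] = []
    good: list[str] = []
    for prompt in turns:
        reply = ask(history, prompt)
        if looks_hallucinated(reply):
            # Contaminated session: drop it, keep only verified context.
            summary = "Verified so far: " + " | ".join(good)
            history = [summary]           # fresh session, seeded
            reply = ask(history, prompt)  # retry once, cleanly
        good.append(reply)
        history.extend([prompt, reply])
    return good

def mock_ask(history, prompt):
    # Toy model: the second prompt "hallucinates" unless the session
    # was restarted from a summary.
    if prompt == "step 2" and history and "Verified" not in history[0]:
        return "[HALLUCINATED] nonsense"
    return f"ok: {prompt}"

print(run_with_restart(["step 1", "step 2"], mock_ask))
# -> ['ok: step 1', 'ok: step 2']
```

The key design choice matches the post: the new session never sees the contaminated transcript, only a summary of what was verified before things went stupid.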
 
A third is that as soon as you see the LLM hallucinate, even if it's something very small, don't keep using that interaction.
Once a hallucination happens the interaction has been contaminated and the deterioration going forward will just get worse.
I agree with this. I was using Google Gemini to produce JSON files as output (I'd just copy-paste the text into a new file), but of course the AI fucked that part up. I asked it to fix it, so it created a Python script that wasn't properly indented, so it was bound to fail no matter what. I was like "No, it's wrong; the indentation is wrong." So it was like "Yes. Lemme fix that." And it would spew out the same code again. I got angry and it made the same mistake six times over. Unbelievable.
I'm also starting to think that Gemini has dumbed down their models a little bit, because I notice that I'm getting frustrated more often at its silly mistakes; with the Thinking model, it barely "thinks" (e.g., "User has asked about X. I will now begin."); I am forced to use Pro if I really want to get something out of it.
My instructions prompt specifically tells the AI to please-please-please tell me if my request doesn't accurately and thoroughly express the desired output, as well as the goals of the conversation, but it just so happens to completely override that. It attempts to solve the problem by itself, only to fuck up because it didn't ask me what I want. So much for nuance.
Also, Google's Gemini is expected to put images everywhere in its output, meaning that it types placeholders like "[Image of a plant]" in the middle of my conversation about plants. That's okay, sort of, but when generating JSON output it literally breaks it apart by typing "[Image of...]" in the middle of the to-be file's content. Insane.
 
Not really sure where to put this, but figured this is as good as any thread.

I've heard a lot of AI generated songs that were.... bad. Some were just generic trash, others just FELT wrong. A few decent ones, but mostly junk. But then in the past week, I found a channel on YouTube filled with AI music. Lyrics written by a human, but all the music and singing is AI generated, and holy shit. There are a few duds, but it's mostly amazing. Granted, the overall genre may not be for everyone, but it's hard to say it's even remotely slop.


If you hadn't told me this was AI, I wouldn't have been able to tell. It actually sounds like a decently produced song from a studio, or done by someone with a ton of experience doing music on PC. It's not award-winning, but it really shows what AI can be used to make, audio-wise.
 
Not really sure where to put this, but figured this is as good as any thread.

I've heard a lot of AI generated songs that were.... bad. Some were just generic trash, others just FELT wrong. A few decent ones, but mostly junk. But then in the past week, I found a channel on YouTube filled with AI music. Lyrics written by a human, but all the music and singing is AI generated, and holy shit. There are a few duds, but it's mostly amazing. Granted, the overall genre may not be for everyone, but it's hard to say it's even remotely slop.

https://youtube.com/watch?v=G8MKvF10fIY, list: RDRez7OWMbz5A
If you hadn't told me this was AI, I wouldn't have been able to tell. It actually sounds like a decently produced song from a studio, or done by someone with a ton of experience doing music on PC. It's not award-winning, but it really shows what AI can be used to make, audio-wise.
I think that's where AI is pretty great. Channels like Hard Archive are humans using it as a tool. In that way, I see it as not much different from something like photoshop, just an extremely advanced version of the same. So long as a human is controlling it, you can feel the difference a lot of times.
 