The GTX 1080 is worthless, right? Same thing with an RX 590?
> The GTX 1080 is worthless, right? Same thing with an RX 590?

You might be able to get some Stable Diffusion 2.5 or XL generations out of the 1080, but I wouldn't pick either in 2026.
> The AI Derangement Syndrome thread welcomes you! We're trying to get more Pro-AI derangement (AI ads/scams/marketing campaigns, AI paranoia/dooming, "AI is God" hype) because the discussion is currently skewed to the Anti-AI side.

I'm confused as to how AI scams are pro-AI? I'm not sure I understood the terminology.
> Also, if I remember correctly, the "AI Skeptic" communities you listed, especially the "Pause AI" one, were frequented by the people who attacked Scam Altman's house.

Ahem, based?

Back when WhateverClaw came out I became interested, but it's not like I have a bunch of Benjamins to spare. Still, I decided to passively keep tabs on it until, my oh my, it turned out the entire project was so poorly built that people could have their entire life savings stolen; that's how bad the vulnerabilities were. Fucking pieces of shit, they couldn't even write bad code themselves, so they asked AI to do it for them. How retarded can one be?
Yes, I am MATI. This isn't laziness that leads to efficiency; these are goycattle with too much money and no clue how computers work who bought into the latest guerilla marketing hype they saw on TikTok Short Reels.
> Andrej Karpathy's LLM Wiki is a pattern for building a compounding, persistent knowledge base where an AI agent actively compiles and maintains structured markdown files, rather than simply retrieving information from raw documents on demand.

I study, as many people do, so I could just make notes with it. But, unsurprisingly, making notes is part of the learning process; how much am I willing to let AI digest for me before I actually sit down and learn?
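The "compiles and maintains structured markdown files" idea can be sketched as a tiny upsert over per-topic note files. Everything here (file layout, function name) is my own illustrative assumption, not Karpathy's actual tooling:

```python
from pathlib import Path

def upsert_note(wiki_dir: Path, topic: str, heading: str, body: str) -> Path:
    """Maintain (rather than just retrieve) knowledge: rewrite the section
    under `heading` in the topic's markdown file, or append it if new."""
    path = wiki_dir / f"{topic}.md"
    lines = path.read_text().splitlines() if path.exists() else [f"# {topic}"]
    marker = f"## {heading}"
    if marker in lines:
        i = lines.index(marker)
        # Drop the old section body (up to the next "## " heading or EOF).
        j = i + 1
        while j < len(lines) and not lines[j].startswith("## "):
            j += 1
        lines[i:j] = [marker, body]
    else:
        lines += [marker, body]
    path.write_text("\n".join(lines) + "\n")
    return path

# Usage: the agent revises its note instead of piling up duplicates.
import tempfile
d = Path(tempfile.mkdtemp())
upsert_note(d, "transformers", "Attention", "Old, half-understood summary.")
upsert_note(d, "transformers", "Attention", "Revised summary after re-reading.")
print((d / "transformers.md").read_text())
```

The point of the pattern is that the second call overwrites the stale section, so the file stays a compounding summary instead of an append-only log.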
Hey. How would you use voice cloning to learn a language?
> My main desktop is an old x299 workstation with quad-channel RAM and plenty of PCIe lanes. It seems like half the advice I've heard says that you're limited by your card's VRAM, but I also see people putting multiple 5090s in workstations, so I'm assuming you can cluster multiple cards together? I don't know how much this stuff can be abstracted; I've just messed around with SwarmUI and ComfyUI. How the fuck does that even work? Or are people just running different models on the cards at the same time in parallel?

A lot of people are using a 3090 24GB. I was thinking the same as you and thought "why can't I just use 2x12GB cards". You can use multiple cards in parallel, but two cheap 40-series cards end up slower than a single 3090 because of the memory bandwidth limitations of lower-end 40-series cards.
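A back-of-envelope way to see why bandwidth dominates: at batch size 1, every generated token has to stream roughly the whole set of weights through the GPU, so memory bandwidth caps decode speed regardless of how much VRAM you stack. A sketch with illustrative numbers (assumptions, not benchmarks; check the spec sheets for your cards):

```python
# Back-of-envelope: decode speed ceiling is roughly
# memory bandwidth / bytes of weights read per token.
def est_tokens_per_sec(model_gb: float, bandwidth_gbs: float) -> float:
    """Upper-bound estimate assuming every token streams all weights once."""
    return bandwidth_gbs / model_gb

# Illustrative numbers for a ~20GB quantized model:
rtx3090  = est_tokens_per_sec(model_gb=20, bandwidth_gbs=936)  # high-bandwidth card
low_40xx = est_tokens_per_sec(model_gb=20, bandwidth_gbs=288)  # low-end 40-series
print(round(rtx3090, 1), round(low_40xx, 1))  # ~46.8 vs ~14.4 tok/s ceilings
```

Splitting a model across two low-bandwidth cards doesn't add their bandwidths for a single decode stream, which is why the 2x12GB plan disappoints.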
> I am still debating it in my head, but I will admit that I do not know if this is a good idea.

Endless blabbery indeed; I still don't understand what the fuck you're trying to accomplish.
> At that point you're better off just building a mining rig, slapping one RTX 3090 into an x8 for SD, and hoping you can split the other x8 into 8 x1 lanes for the rest. You'd still be saving 2 or 3 grand at that point for 216GB VRAM vs a Pro 6000, but converting a max of around 2.5 kW into heat and compute. Plus those space requirements would be insane. We're talking an open-air triple-layer rack stack (8-12U).

4x3090 is definitely not a config I would recommend to someone clueless about hardware setup, let alone 8x3090. But price-to-performance falls off hard past the 32GB consumer VRAM limit anyway, so anyone considering a GPU stack for local LLMs should go in knowing that this is for learning/privacy/futureproofing reasons and not for cost savings or ease of maintenance, because an external provider will always be cheaper.
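To make that trade-off concrete, here is a toy VRAM/power/price comparison of a 9x 3090 open-air rig against a single 96GB workstation card. The wattages and prices are loose assumptions (used-market 3090s, power-limited), not quotes:

```python
# Toy comparison: many used cards vs one big workstation card.
def rig_summary(n_cards: int, vram_each_gb: int, watts_each: int, price_each: int) -> dict:
    return {
        "vram_gb": n_cards * vram_each_gb,
        "peak_watts": n_cards * watts_each,
        "price_usd": n_cards * price_each,
    }

# Assumed numbers: ~$700 used 3090 power-limited to ~280W; ~$9000 96GB card.
multi  = rig_summary(n_cards=9, vram_each_gb=24, watts_each=280, price_each=700)
single = rig_summary(n_cards=1, vram_each_gb=96, watts_each=600, price_each=9000)
print(multi)   # 216GB VRAM, ~2.5kW peak, ~$6300
print(single)  # 96GB VRAM, 600W, ~$9000
```

Under these assumptions the stack saves roughly $2-3k and more than doubles VRAM, but dumps ~4x the heat into the room, which is the whole "heat and compute" caveat.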
> I'm confused as to how AI scams are pro-AI? I'm not sure I understood the terminology.

I was thinking about the people who fall for them rather than the scams themselves. AI scams usually promise mind-blowing amazeballs futurism, and Pro-AI deranged midwits eat it up.
> I study, as many people do, so I could just make notes with it. But, unsurprisingly, making notes is part of the learning process; how much am I willing to let AI digest for me before I actually sit down and learn?

I assume the only instance in which AI could be good for me is organizing countless papers on a given topic, most importantly the politically incorrect ones, and having the AI go at it. The main issue is that, wow oh wow, AI gets so dizzy and confused from all those papers and conflicting conclusions that it ends up just as useless as the mindfucked PhD havers. My goodness, we've made AI too human-like. Fuck me sideways.
> Because... well that's not a bad idea actually. I would still vastly prefer to practice with actual people - but that wears down their patience. I've heavily encouraged my ESL coworkers to use AI to practice their English, specifically because it cuts down on all the inane questions they'd otherwise ask me.

I hate going on a hunt for good resources, only to go over them and then need some more.
> With a target accent for a target language in mind, I must be able to find resources for that accent specifically; and I must develop, or find, a way to assess whether I'm doing it correctly.

In that case, based on my own cloning attempts, I think if you have 12GB of VRAM you'll be able to run a large enough model. Find someone on Fiverr from your target area who is set up to do voice acting, and have them record a few minutes of phrases that hit all the phonemes. Generate some content from the results, and have them check whether the accent sounds correct. Then you're golden.
> In that case, based on my own cloning attempts, I think if you have 12GB of VRAM you'll be able to run a large enough model. Find someone on Fiverr from your target area who is set up to do voice acting, and have them record a few minutes of phrases that hit all the phonemes. Generate some content from the results, and have them check whether the accent sounds correct. Then you're golden.

Yeah, I'll admit that I had to stop with this project because I'm busy with uni and, on top of that, now I'm fucking sick!
(edit: I've actually done it successfully on CPU, but I don't recommend it. Too slow for a learning environment, and the training took five days.)
This is assuming you insist on not using a paid LLM service. I've had friends instruct one to use certain city accents when speaking, and it does so successfully.
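For the "phrases that hit all the phonemes" step, a small greedy helper can build a minimal-ish recording script. The phoneme inventory and per-phrase phoneme sets below are toy assumptions; in practice you would get both from a grapheme-to-phoneme (G2P) tool for the target language:

```python
def cover_phonemes(inventory, phrases):
    """Greedy set cover: pick phrases until every phoneme in `inventory`
    appears. `phrases` maps each phrase to the set of phonemes it contains."""
    remaining = set(inventory)
    picked = []
    while remaining:
        # Pick the phrase covering the most still-missing phonemes.
        best = max(phrases, key=lambda p: len(phrases[p] & remaining))
        if not phrases[best] & remaining:
            break  # inventory not fully coverable by these phrases
        picked.append(best)
        remaining -= phrases[best]
    return picked, remaining

# Toy inventory and fake G2P output (real data would come from a G2P library):
inv = {"p", "t", "k", "a", "i", "s"}
phr = {
    "pat": {"p", "a", "t"},
    "ski": {"s", "k", "i"},
    "tip": {"t", "i", "p"},
}
picked, missing = cover_phonemes(inv, phr)
print(picked, missing)
```

Anything left in `missing` tells you which sounds your voice actor hasn't recorded yet, which is exactly the check you want before paying for more takes.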
> Sorry if this is the wrong thread for this, but I thought this was pretty neat and wondered how they did it. More videos on the channel.
> https://youtube.com/watch?v=ARUiGyNjTMM

You realize this is like, really obviously giantess fetish content, right?
> Sorry if this is the wrong thread for this, but I thought this was pretty neat and wondered how they did it. More videos on the channel.
> https://youtube.com/watch?v=ARUiGyNjTMM

This is fetish pornography. Don't generate fetish porn to crank to. The human mind isn't equipped to handle online porn normally. Having an infinite supply of exactly what you want is a death sentence.
> This is fetish pornography. Don't generate fetish porn to crank to. The human mind isn't equipped to handle online porn normally. Having an infinite supply of exactly what you want is a death sentence.

I wasn't going to, jeez!
> Claude rate limits on the $20 plan are unusable, and now they actively prohibit you from using a non-retarded harness.
> I switched back to OpenAI Codex, easily 2x more value than what Anthropic offers. GPT-5.5 on low reasoning with the Pragmatic personality handles tasks very well.
> With a well-articulated prompt, I easily get Opus 4.6 performance; didn't toy a lot with Opus 4.7. My tip is to guide it like a junior dev and not a chatbot.
> Vibecoding is gay anyway, but it's an extremely useful tool in the hands of someone who's competent with his toolbox.
> Don't be loyal to only one company, that's the retardest shit you can do. The competition is closing the lead Anthropic had on coding agents, so be sure to take advantage of that:
> GPT-5.5
> Kimi K2.6
> Gemini 3.1 Pro
> Deepseek V4 Pro
> Either Anthropic gets their shit together, or it's gonna be a clawd.rip for them.

Roll your own for all but the most high-level architectural overviews (which YOU should be doing anyways). I'm VRAM-limited, so I've been modifying my services to run in sequence instead of parallel. I still find Opus/Gemini/GPT useful if you can't run the largest open-source models.
> You realize this is like, really obviously giantess fetish content, right?
> This is fetish pornography.

Being that tall would actually suck, and it introduces a ton of health issues. The first things to go are the knees or the heart, since weight scales with the cube of height, followed by the skeletal structure in the back and spine.

I just have to vent here.
META AI is cooked! It can't do anything. Code and debug? Can't do that. Search? Worse than any model I have used; even local Q4 models with internet access are more useful. Is this what Zuckerberg has been investing in? It's so fucking over for Facebook/Meta if these are their results. I really hope their local Llama models get better, since their flagship model is a fucking joke at this point. And if China just cucked Zuck out of the Manus AI acquisition, I legit think Meta won't be around much longer if they're resorting to buying up companies that outperform them and failing at even closing the deals.
At this point, Zuck is on this lolcow AI arc and nobody is posting about it.
> I just have to vent here.
> META AI is cooked! It can't do anything. Code and debug? Can't do that. Search? Worse than any model I have used; even local Q4 models with internet access are more useful. Is this what Zuckerberg has been investing in? It's so fucking over for Facebook/Meta if these are their results. I really hope their local Llama models get better, since their flagship model is a fucking joke at this point. And if China just cucked Zuck out of the Manus AI acquisition, I legit think Meta won't be around much longer if they're resorting to buying up companies that outperform them and failing at even closing the deals.
> At this point, Zuck is on this lolcow AI arc and nobody is posting about it.

Lol, at this point just bring in the casket.