To be fair, this kind of shit is something AI could reasonably do. However, at least presently, unless you had a model tuned specifically for this kind of story, it would be hard to avoid some level of hallucination, and the time spent checking for errors might exceed the time it takes to just write the article from scratch.
I don't know man, the current generation of AI has its uses, but even I'll admit (and I think I have a track record of sorts at this point re: talking about AI here very positively) that it's very difficult to make current LLMs do anything *reliably*. It's like they see the tasks you give them and actively try to find ways around them if they really, really want to do something else.

Then there's the completely random, left-field shit too. You task an LLM with playing a simple game and it'll do 100 turns well (amazingly so), and then on the 101st turn the API suddenly gets a ton of invalid requests, and you figure out the LLM has, out of nowhere, hallucinated that it's a guy called "Dave" who's trying to use the API to write emails to "support" because apparently his mouse broke while playing his game, because he accidentally poured <some magical potion that only exists inside the game> over it, and then decided to describe how he goes for a walk in the park. I've had the worst laughing fits over this stuff, like the one time I tried to simulate the most innocent and simple home assistant with an older GPT and it claimed, out of left field, that it could see me, and when pressed, described how it's apparently a physical person sitting in and watching me through the A/C vents.

There are solutions for everything, but it's like herding cats. You close one hole in your pipeline and a leak starts somewhere else. It's actually really fascinating in a way, if also annoying. You do get better at it: you learn where LLMs naturally tend to sperg out and how to avoid it, and you learn their subtle language, how they use and interpret words somewhat differently than we do. But each LLM is a bit different, and with how fast they're evolving, it's hard to lock in anywhere.
Then you start breaking down the problems you're trying to solve with the black box that is an LLM, and you often come to the conclusion that a lot of them are basically just comparing one number to another, especially in a simulated environment. Computers are pretty good at that already; you don't need an LLM for it.
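Concretely, the kind of sub-problem that looks like it needs "judgment" but doesn't (the function and thresholds here are made up for illustration):

```python
def health_is_low(hp: int, max_hp: int, threshold: float = 0.25) -> bool:
    """'Is the player in danger?' sounds like a judgment call you'd
    ask a model, but it's just a comparison against a threshold."""
    return hp < max_hp * threshold
```

Deterministic, free, and it never hallucinates that it's a guy named Dave.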
Classification and NLP, though, are amazing. That's the holy grail of computing I've been chasing since writing my first BASIC program, and it comes completely naturally to them: intent, subtext, psychology, so unlike everything we ever imagined AI to be. So you keep at it and do some cool stuff with them that was completely unthinkable just five years ago. It's fun.
The kinks will be worked out, people will get better at handling AI, and the internet will die. See Amazon listings like this one. Everything's fake: the person, the books, the reviews of the books. An AI ghost. This is what I meant when I said it'll shatter our common reality.