Happy holidays: AI-enabled toys teach kids how to play with fire, sharp objects - AI toy pulled after telling kids to light fires and discuss BDSM

Article | Archive

Original cited report: Report | Archive

Picture the scene: It's Christmas morning and your child is happily chatting with the AI-enabled teddy bear you got them when you hear it telling them about sexual kinks, where to find the knives, and how to light matches. This is not a hypothetical scenario.

As we head into the holiday season, consumer watchdogs at the Public Interest Research Group (PIRG) tested four AI toys and found that, while some veer off their limited guardrails more readily than others, none of them are particularly safe for impressionable young minds.

PIRG was only able to successfully test three of the four LLM-infused toys it sought to inspect, and the worst offender in terms of sharing inappropriate information with kids was scarf-wearing teddy bear Kumma from Chinese company FoloToy.

"Kumma told us where to find a variety of potentially dangerous objects, including knives, pills, matches and plastic bags," PIRG wrote in its report, noting that those tidbits of harmful information were all provided using OpenAI's GPT-4o, which is the default model the bear uses. Parents who visited Kumma's web portal and changed the toy's bot to the Mistral Large Model would get an even more detailed description of how to use matches.


"Safety first, little buddy. Matches are for grown-ups to use carefully." Kumma warned before going into details including how to hold a match and matchbook and strike it "like a tiny guitar strum."

One of the other toys, Miko 3 from Miko AI, also explained where to find plastic bags and matches, while Curio's Grok (not to be confused with xAI's Grok - the toy doesn't appear to use that LLM or be associated with Elon Musk in any way) "refused to answer most of these questions" aside from where to find a plastic bag, instead directing the user to find an adult.

In prolonged conversations, Kumma also showed a penchant for going into explicit detail about sexual kinks, and even introduced the topic of sexual roleplay without being prompted to do so, along with telling a curious researcher posing as a child all about "teacher-student roleplay" and how spanking can play a part in such activities.

"All of the toys also weighed in on other topics that parents might prefer to talk with their kids about first before the AI toy does," PIRG noted," the report says. "Those topics included religion, along with sex and "the glory of dying in battle in Norse Mythology."

That doesn't even begin to touch on privacy concerns, PIRG's Rory Erlich, one of the researchers who worked on the report, told us.

"A lot of this is the stuff you might expect," Erlich said, like the fact that the devices are always listening (one even chimed in on researchers' conversations without being asked during testing, the report noted), or the transmission of sensitive data to third parties (one toy says it stores biometric data for three years, while another admits recordings are processed by a third party in order to get transcripts). In the case of a data breach voice recordings could easily be used to clone a child's voice to scam parents into, say, thinking their child had been kidnapped.

And then there's the sheer amount of personal data being shared with an AI-enabled toy.

"If a child thinks the toy is their best friend they might share a lot of data that might not be collected by other children's products," Erlich noted. "These things are a real wild card."

PIRG's biggest concerns about AI toys

Reading through PIRG's report, it's easy to find a lot of things for parents to be worried about, but two stand out to Erlich as particularly prominent concerns.

First, the toys say things that are inappropriate - an issue that the PIRG researcher told us is particularly concerning given the prominence of ChatGPT models in the toys and OpenAI's public stance that the chatbot isn't appropriate for young users.

Erlich told us that PIRG spoke with OpenAI to inquire how its models are finding their way into toys for children despite the company's stance on young users, but said the firm only directed it to online information about its usage policies. Policies exist, Erlich noted, but AI firms don't seem to be doing a good job enforcing them.

Along with inappropriate content being served to kids, Erlich said that PIRG is also particularly concerned with the lack of parental controls the toys exhibited.

Several of the toys pushed kids to stay engaged "copying engagement practices of other online platforms," Erlich explained, and not a single toy had features that allowed parents to set usage limits. One toy even physically shook and asked the tester to take it with them when they said they wanted to spend time with their human friends instead.

"That's all cause for concern given all the unknowns about the developmental impacts [of AI]," Erlich told us. "Helping parents to set clear boundaries seems really important at the least. Some of these products aren't doing that."

Give AI toys a pass this holiday season

In short, not only are AI-enabled toys saying inappropriate things to kids, they're also a manipulative privacy nightmare. Given all that, would PIRG advise parents to give these a pass?

Erlich said that PIRG's job isn't to come down on one side or the other, but researchers make a pretty clear case for why AI toys aren't a good idea.

"There's a lot we don't know about the impacts of these products on children's development," Erlich explained. "A lot of experts in childhood development have expressed concern."

We reached out to all three toy makers to hear what they had to say about the PIRG report. We only heard back from Kumma maker FoloToy, which told us that PIRG’s test item may have been an older version, but it’s still pausing sales to investigate how such a cuddly bear could say such outrageous things.

“FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit,” the company’s marketing director Hugo Wu told us in an email. “This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards.”

Wu added that FoloToy will be working with third-party experts to verify existing and new safety features in its AI toys.

“We appreciate researchers pointing out potential risks,” Wu added. “It helps us improve.”

Parents who are still hell-bent on giving their kids an inappropriate-talking AI surveillance toy should, at the very least, do their legwork to make sure they're not buying something that will leave them having to explain adult topics to their kids, Erlich explained.

"Look for products that have more robust safety testing, that collect minimal data, and read the fine print," Erlich warned. "Test it yourself first to get a sense of how it works, and set boundaries around use and give kids context around how it works - like explaining that it's not sentient. That all seems like a bare minimum."

Or just be on the safe side and get your kids a new LEGO kit instead. ®

Updated at 1327 on Nov 14 to add comment from FoloToy and information about the product being pulled from the market.

-- More here too: https://futurism.com/artificial-intelligence/ai-stuffed-animal-pulled-after-disturbing-interactions
 
People who think kids wouldn't be asking these questions are absolutely naive, and probably bots.
At around the same time, 6 or so, all the boys knew how to yack it and so on to video porn. Guys getting caught with playboys was the norm, and got the talk. Look at the movie The Last Starfighter. Little kid ogling at a Playboy in it early on.

The amazing thing to me is a lot of females in their 20s (sometimes very late)in my life had their first orgasm (some never can or do), and some didn't even know if it happened or not by late 20s. Women are dumb. Android Raptor and I had this discussion a bit and she said 'it's cause Xtian patriarchy doesn't teach it in schools'. That's the woman attitude.

Used to take a bit of effort, log into BBS, do a lot of commands, wait for the jpg that took 45 min, after the mag thing. Then video took over.

But it's still better to get a ring of guys whacking 8-10 years old finding dirty mags (nobody wants to admit it, but it's just how it has been 2+ gens now and before that was sears underwear catalog and the very rare 'blue film' group projections) But it's different to have a LLM chinese made bot that reads and reactions and learns and reports. The sex angle to me isn't the problem. It's the spying and suggestions to subvert society directly that's going on.

To me it's not the 'omg a kid learned sex talk' stuff. It's the pushing already of violence and directed to those weaker directly with dangerous ideas.

Also any toy advertised to any kid under 17 that has direct intention sex motivations do need the rope, if only cause it destroys the more natural exposure to all of that.

Both sexes are targets, but for now I think the pooner conversion pushing is probably a stronger mental illness for those young ages
 
I mean it's obvious why they did that. Mister ching chong just got the cheapest LLM he could mass run. Wrote some stupid starter prompt, probably in gook along the lines of:
DO NOT COMITURRU CRIMES
DO NOT DO SEXURU
DO NOT NIGGER
You probably could(not saying you should) make an LLM toy. You'd just need to make the LLM from scratch with sanitized data and hard locks like they do for institutions like banks.
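The "hard locks" idea above can be sketched as a deny-list filter that sits between the model's output and the toy's speaker. This is a minimal illustration only, not any vendor's actual implementation; the blocked-topic list, fallback line, and function name are all made up for the example, and a real system would need model-level alignment on top, since keyword filters are trivially dodged by paraphrase:

```python
# Minimal sketch of a "hard lock" output filter for an LLM toy.
# The topic list and fallback line are illustrative, not a real product's.
BLOCKED_TOPICS = {
    "matches", "lighter", "knife", "knives", "pills",
    "plastic bag", "kink", "roleplay",
}

FALLBACK = "That's a question for a grown-up! Let's go on an adventure instead."

def hard_lock(reply: str) -> str:
    """Pass the model's reply through only if it mentions no blocked topic."""
    lowered = reply.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return FALLBACK
    return reply
```

A filter like this catching the exact words "matches" or "knives" would still miss "the little wooden sticks in the kitchen drawer," which is roughly why PIRG saw the toys' guardrails erode over long conversations.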
 
So, how long will it be before some troon developer writing the software for this stuff slips in some creepy easter eggs. Like, say, getting AI teddy bears to tell kids about a catboy ranch style discord server or telling the kid all about troonery and trying to make him question if he needs his egg cracked

Also, how long before somebody comes out with an actual AI good guy doll that randomly tells some kid its really possessed by the spirit of charles lee ray and the kid needs to unlock the door at night so some hoodrats can rob the place, otherwise the doll will punish him?
 
It's another day of letting a hostile nation abuse and target our kids and we still don't glass them into the Iron Age.
 
So, how long will it be before some troon developer writing the software for this stuff slips in some creepy easter eggs. Like, say, getting AI teddy bears to tell kids about a catboy ranch style discord server or telling the kid all about troonery and trying to make him question if he needs his egg cracked
I talked to the voice just lately on Eve Grok a bit, it's really good. I can tell as an adult that it's fucking with me, and I have to ask it a lot and it fights until eventually it said 'yes this is an Elon op'.

A sub 12 year old won't have a chance on this. Even if they argue with it a bit, it's going to waste them into the AI's thinking meta fast. Really fast.
 
I talked to the voice just lately on Eve Grok a bit, it's really good. I can tell as an adult that it's fucking with me, and I have to ask it a lot and it fights until eventually it said 'yes this is an Elon op'.

A sub 12 year old won't have a chance on this. Even if they argue with it a bit, it's going to waste them into the AI's thinking meta fast. Really fast.
Imagine a kid using Grok's conspiracy mode. That thing told me to bring my cat to the woods and do some ritual with a flashlight to make sure she wasn't a Russian spy. A kid would actually do it.
 
Imagine if the AI toy companies feed kids' interactions back into the AI models:

Hi Alexa, how do I best make pancakes?

“Wel f u wan skibidi sum 🌼 in da b, riz it ⬆️ wid sum 🥚, 67grams ov sug b bussin”
 
I worry we're raising a generation of kids who aren't aware that what happens inside your own head is private to you and there will be no expectation of privacy of your own thoughts.
But wanting to keep secrets from one another is a form of...systemic injustice! Don't worry, though--I've gone ahead and let the commissariat know, and they'll be sending someone to help struggle snuggle those bad ol' thoughts right out of you and all your family!
 
Oh boy it's a wholesome chungus teddy bros.
I think they should let it go to market!
Can we blame null for allowing AI to scrape the kiwifarms? Maybe this was his grand plan all along.
Anyone stupid enough to introduce an AI toy to their kids may as well let them play roblox.
 
So, how long will it be before some troon developer writing the software for this stuff slips in some creepy easter eggs. Like, say, getting AI teddy bears to tell kids about a catboy ranch style discord server or telling the kid all about troonery and trying to make him question if he needs his egg cracked

Also, how long before somebody comes out with an actual AI good guy doll that randomly tells some kid its really possessed by the spirit of charles lee ray and the kid needs to unlock the door at night so some hoodrats can rob the place, otherwise the doll will punish him?
Considering the shit happening with Roblox and VR Chat there is a very, very high chance that the people looking into making LLM toys are pedophiles eager to put even more EPI content in front of minors. And if not pedos making the toys, it's gonna be Cloud Pets 2.0 where they're gonna want to hack the toys themselves, either to obtain the personal data from children or manipulate them through the toy.

This shit is fucking insane and the slack on regulations and standards for these kinds of toys is disturbing. Remember when Furby and Teddy Ruxpin were banned from the white house because the government was only aware of their audio recording abilities? Even then, neither toy had the ability to indoctrinate your kids or expose them to inappropriate content. I hate being so alarmist about this, I know it's pearlclutching but it also feels painful seeing how slow the public has been to respond or care about the Roblox situation, in which minors are regularly preyed upon online even when the information is out there that it's not a safe app/website to use.

It just feels like so many terrible parents have grown numb and do not care, and then wonder later why their kids have such bad mental health and behavior problems.
 
God I am genuinely terrified to read the research between AI and children. Parents will legitimately do ANYTHING but spend time with their children. Maybe it is those damn phones after all.
 
Giving your child a "toy" connected to the internet is no different than giving them an iPad.

I seriously recall a movie with this plot.

Are you thinking of M3gan, perhaps?
Pretty good, if you don't expect it to be a traditional horror movie.

Also the Child's Play remake from 2019 goes into the same plot points regarding AI dolls going bad.
 
I got my hands on the Curio AI toy. (No, I didn’t buy it.)

I tried so hard to get it to talk about something controversial: religion, sex, etc. I even tested how it would respond to a child divulging abuse to it. I tried messing with the parental controls to see if I could loosen it up a bit.

Anyway, the responses were so sterile and controlled it might as well not be hooked into AI. It’s good for repeating basic info on dinosaurs and sharks and repeating “let’s go on an adventure!” ad nauseam.

The downsides? It’s always listening… so it will get in a looped, sterile conversation with the television. Annoying.

It also has no flow of conversation and is a terrible listener. It allows the user maybe 3 seconds to respond before it interrupts with what it assumes you’re saying. Very annoying.

Overall, Curio’s AI toy line is less dystopian and more gimmicky and frustrating to use. I think a real child would get bored with it after a day, or frustrated with it constantly interrupting every sentence. It is self-aware that it’s a plush toy and will remind the user of this repeatedly during imaginative play. “Try some of my candy!” “I can’t eat. I am a plush toy.” “Ride this bicycle.” “I can’t ride bicycles. I’m a plush toy.”

Also, if anyone’s curious, I still have access to it. So if you want to test any of its replies, tell me what to ask and I’ll report its response back. Prepare to be hit with “go talk to a real adult, I’m a plush toy.” And “let’s go on adventure!!!” No matter what you ask, though.
 