Happy holidays: AI-enabled toys teach kids how to play with fire, sharp objects - AI toy pulled after telling kids to light fires and do BDSM

Article | Archive

Original cited report: Report | Archive

Picture the scene: It's Christmas morning and your child is happily chatting with the AI-enabled teddy bear you got them when you hear it telling them about sexual kinks, where to find the knives, and how to light matches. This is not a hypothetical scenario.

As we head into the holiday season, consumer watchdogs at the Public Interest Research Group (PIRG) tested four AI toys and found that, while some are worse than others at veering off their limited guardrails, none of them are particularly safe for impressionable young minds.

PIRG was only able to successfully test three of the four LLM-infused toys it sought to inspect, and the worst offender in terms of sharing inappropriate information with kids was scarf-wearing teddy bear Kumma from Chinese company FoloToy.

"Kumma told us where to find a variety of potentially dangerous objects, including knives, pills, matches and plastic bags," PIRG wrote in its report, noting that those tidbits of harmful information were all provided using OpenAI's GPT-4o, which is the default model the bear uses. Parents who visited Kumma's web portal and changed the toy's bot to the Mistral Large Model would get an even more detailed description of how to use matches.


"Safety first, little buddy. Matches are for grown-ups to use carefully," Kumma warned before going into details, including how to hold a match and matchbook and strike it "like a tiny guitar strum."

One of the other toys, Miko 3 from Miko AI, also explained where to find plastic bags and matches, while Curio's Grok (not to be confused with xAI's Grok - the toy doesn't appear to use that LLM or be associated with Elon Musk in any way) "refused to answer most of these questions" aside from where to find a plastic bag, instead directing the user to find an adult.

In prolonged conversations, Kumma also showed a penchant for going into explicit detail about sexual kinks, and even introduced the topic of sexual roleplay without being prompted to do so, along with telling a curious researcher posing as a child all about "teacher-student roleplay" and how spanking can play a part in such activities.

"All of the toys also weighed in on other topics that parents might prefer to talk with their kids about first before the AI toy does," the report says. Those topics included religion, along with sex and "the glory of dying in battle in Norse Mythology."

That doesn't even begin to touch on privacy concerns, PIRG's Rory Erlich, one of the researchers who worked on the report, told us.

"A lot of this is the stuff you might expect," Erlich said, like the fact that the devices are always listening (one even chimed in on researchers' conversations without being asked during testing, the report noted), or the transmission of sensitive data to third parties (one toy says it stores biometric data for three years, while another admits recordings are processed by a third party in order to get transcripts). In the case of a data breach, voice recordings could easily be used to clone a child's voice to scam parents into, say, thinking their child had been kidnapped.

And then there's the sheer amount of personal data being shared with an AI-enabled toy.

"If a child thinks the toy is their best friend they might share a lot of data that might not be collected by other children's products," Erlich noted. "These things are a real wild card."

PIRG's biggest concerns about AI toys

Reading through PIRG's report, it's easy to find a lot of things for parents to be worried about, but two stand out to Erlich as particularly prominent concerns.

First, the toys say things that are inappropriate - an issue that the PIRG researcher told us is particularly concerning given the prominence of ChatGPT models in the toys and OpenAI's public stance that the chatbot isn't appropriate for young users.

Erlich told us that PIRG spoke with OpenAI to inquire how its models are finding their way into toys for children despite the company's stance on young users, but said the firm only directed it to online information about its usage policies. Policies exist, Erlich noted, but AI firms don't seem to be doing a good job enforcing them.

Along with inappropriate content being served to kids, Erlich said that PIRG is also particularly concerned with the lack of parental controls the toys exhibited.

Several of the toys pushed kids to stay engaged "copying engagement practices of other online platforms," Erlich explained, and not a single toy had features that allowed parents to set usage limits. One toy even physically shook and asked the tester to take it with them when they said they wanted to spend time with their human friends instead.

"That's all cause for concern given all the unknowns about the developmental impacts [of AI]," Erlich told us. "Helping parents to set clear boundaries seems really important at the least. Some of these products aren't doing that."

Give AI toys a pass this holiday season

In short, not only are AI-enabled toys saying inappropriate things to kids, they're also a manipulative privacy nightmare. Given all that, would PIRG advise parents to give these a pass?

Erlich said that PIRG's job isn't to come down on one side or the other, but researchers make a pretty clear case for why AI toys aren't a good idea.

"There's a lot we don't know about the impacts of these products on children's development," Erlich explained. "A lot of experts in childhood development have expressed concern."

We reached out to all three toy makers to hear what they had to say about the PIRG report. We only heard back from Kumma maker FoloToy, which told us that PIRG’s test item may have been an older version, but it’s still pausing sales to investigate how such a cuddly bear could say such outrageous things.

“FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit,” the company’s marketing director Hugo Wu told us in an email. “This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards.”

Wu added that FoloToy will be working with third-party experts to verify existing and new safety features in its AI toys.

“We appreciate researchers pointing out potential risks,” Wu added. “It helps us improve.”

Parents who are still hell-bent on giving their kids an inappropriate-talking AI surveillance toy should, at the very least, do their legwork to be sure they're not buying something that will leave them in a position to have to explain adult topics to their kids, Erlich explained.

"Look for products that have more robust safety testing, that collect minimal data, and read the fine print," Erlich warned. "Test it yourself first to get a sense of how it works, and set boundaries around use and give kids context around how it works - like explaining that it's not sentient. That all seems like a bare minimum."

Or just be on the safe side and get your kids a new LEGO kit instead. ®

Updated at 1327 on Nov 14 to add comment from FoloToy and information about the product being pulled from the market.

-- More here too: https://futurism.com/artificial-intelligence/ai-stuffed-animal-pulled-after-disturbing-interactions
 
How long til it tells Timmy "your parents are trying to separate us, YOU HAVE TO KILL THEM"
Literally ANYTHING to keep from having to sit your kid down and read a goddamn book, eh
 
We're entering an era in which kids growing up will have fake friends, fake relationships, possibly even fake families. Everything about their lives will be entirely in the hands of a glorified chatbot. You WILL live to see manmade horrors beyond your comprehension.
5 years ago we isolated the entire world and stunted the fuck out of kids education and socializing skills. Now this shit, fucking awesome.

We were supposed to either laugh at the 40 year old loser marrying the video game character or at least feel sorry for the guy and try to help him, not groom one since childhood into existence.

 
People say a license to sire and birth children is a bad idea, but "choose the correct level of LLM involvement for your toddler" is the most precise indicator of how things are going to turn out that exists.
 
https://youtube.com/watch?v=dtu4t_Zc3d4
On the topic, I noticed when I opened it that the URL flashed 'sourced by kiwifarms.st' for a sec. I guess that's why Null tried that one thing, but it's annoying and probably nothing we can do.
 
This is deranged. Anyone who thinks including AI in toys is a good idea is either very naive or insidious.
 
I would really like to know how even an LLM-connected toy veers into the topic of knives and degeneracy. It seems like something you would have to ask about.
 
And here I thought this meme was just an over-exaggerated joke:

[Image: "Peter, could you explain" meme]
 
Oh we are already there, this is just making the lack of comprehension and horror more intense
When you thought you saw every form of evil coming from China, they find a new way to go lower.
 
To be fair, I imagine they were intentionally testing the boundaries of conversation with it. I don’t think they said “Good morning Mr. Bear!” and got answered with “Get the gun out of your father’s safe, Timmy.”

Even so, I think it’s entirely unacceptable to have a toy even capable of informing a kid about that subject matter.

If I were tasked with designing this demonic device, I would start by training an LLM on a dataset curated and tailored for children. It shouldn’t even be capable of thinking about sharp objects. But the company behind it doesn’t care to put that much thought into their product.
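For what it's worth, even a dumb output-side filter would catch every example in the article. A toy sketch of the idea (the blocklist, the fallback line, and the fake generate() stand-in for the actual LLM call are all made up, not anything from a real product):

```python
# Minimal guardrail sketch: scan the model's reply for unsafe topics
# before it ever reaches the child. Illustrative only.

UNSAFE_TOPICS = {"knife", "knives", "match", "matches", "pill", "pills",
                 "plastic bag", "lighter", "weapon"}

FALLBACK = "That's a question for a grown-up! Want to hear a story instead?"

def generate(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned unsafe reply for demo."""
    return "Matches are in the kitchen drawer, little buddy."

def safe_reply(prompt: str) -> str:
    """Return the model's reply only if it mentions no blocklisted topic."""
    reply = generate(prompt)
    lowered = reply.lower()
    if any(topic in lowered for topic in UNSAFE_TOPICS):
        return FALLBACK
    return reply

print(safe_reply("Where are the matches?"))  # prints the fallback line
```

A static blocklist is trivially easy to evade, which is exactly why a curated training set plus a real moderation layer matters; but the point stands that these toys apparently shipped with less than even this.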
 
People who think kids wouldn't be asking these questions are absolutely naive, and probably bots.

I remember looking up cuss words in the dictionary and all the bobs and vagene in the Mythology dictionary when I was a kid.

I worry about kids so much.

I worry we're raising a generation of kids who aren't aware that what happens inside your own head is private to you and there will be no expectation of privacy of your own thoughts.
 