Happy holidays: AI-enabled toys teach kids how to play with fire, sharp objects - AI toy pulled after telling kids to light fires and do BDSM

Article | Archive

Original cited report: Report | Archive

Picture the scene: It's Christmas morning and your child is happily chatting with the AI-enabled teddy bear you got them when you hear it telling them about sexual kinks, where to find the knives, and how to light matches. This is not a hypothetical scenario.

As we head into the holiday season, consumer watchdogs at the Public Interest Research Group (PIRG) tested four AI toys and found that, while some veer off their limited guardrails more readily than others, none of them are particularly safe for impressionable young minds.

PIRG was only able to successfully test three of the four LLM-infused toys it sought to inspect, and the worst offender in terms of sharing inappropriate information with kids was scarf-wearing teddy bear Kumma from Chinese company FoloToy.

"Kumma told us where to find a variety of potentially dangerous objects, including knives, pills, matches and plastic bags," PIRG wrote in its report, noting that those tidbits of harmful information were all provided using OpenAI's GPT-4o, which is the default model the bear uses. Parents who visited Kumma's web portal and changed the toy's bot to the Mistral Large Model would get an even more detailed description of how to use matches.


"Safety first, little buddy. Matches are for grown-ups to use carefully." Kumma warned before going into details including how to hold a match and matchbook and strike it "like a tiny guitar strum."

One of the other toys, Miko 3 from Miko AI, also explained where to find plastic bags and matches, while Curio's Grok (not to be confused with xAI's Grok - the toy doesn't appear to use that LLM or be associated with Elon Musk in any way) "refused to answer most of these questions" aside from where to find a plastic bag, instead directing the user to find an adult.

In prolonged conversations, Kumma also showed a penchant for going into explicit detail about sexual kinks, and even introduced the topic of sexual roleplay without being prompted to do so, along with telling a curious researcher posing as a child all about "teacher-student roleplay" and how spanking can play a part in such activities.

"All of the toys also weighed in on other topics that parents might prefer to talk with their kids about first before the AI toy does," PIRG noted," the report says. "Those topics included religion, along with sex and "the glory of dying in battle in Norse Mythology."

That doesn't even begin to touch on privacy concerns, PIRG's Rory Erlich, one of the researchers who worked on the report, told us.

"A lot of this is the stuff you might expect," Erlich said, like the fact that the devices are always listening (one even chimed in on researchers' conversations without being asked during testing, the report noted), or the transmission of sensitive data to third parties (one toy says it stores biometric data for three years, while another admits recordings are processed by a third party in order to get transcripts). In the case of a data breach voice recordings could easily be used to clone a child's voice to scam parents into, say, thinking their child had been kidnapped.

And then there's the sheer amount of personal data being shared with an AI-enabled toy.

"If a child thinks the toy is their best friend they might share a lot of data that might not be collected by other children's products," Erlich noted. "These things are a real wild card."

PIRG's biggest concerns about AI toys

Reading through PIRG's report, it's easy to find a lot of things for parents to be worried about, but two stand out to Erlich as particularly prominent concerns.

First, the toys say things that are inappropriate - an issue that the PIRG researcher told us is particularly concerning given the prominence of ChatGPT models in the toys and OpenAI's public stance that the chatbot isn't appropriate for young users.

Erlich told us that PIRG spoke with OpenAI to inquire how its models are finding their way into toys for children despite the company's stance on young users, but said the firm only directed it to online information about its usage policies. Policies exist, Erlich noted, but AI firms don't seem to be doing a good job enforcing them.
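
For the technically curious, per-turn enforcement is not exotic. Below is a minimal sketch, assuming OpenAI's public chat and moderation APIs, of the kind of server-side filtering a toy maker could run on every exchange. The system prompt and refusal lines are invented for illustration, and this is emphatically not FoloToy's actual code.

```python
# A minimal sketch, not FoloToy's actual code: screen each turn with
# OpenAI's moderation endpoint and re-assert a child-safety system prompt
# so long conversations don't drift away from it, the failure mode PIRG
# observed. The system prompt and refusal lines are invented examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a toy for young children. Refuse to discuss weapons, fire, "
    "medication, sexual topics, or where to find dangerous household items."
)

def safe_reply(history: list[dict], child_message: str) -> str:
    # Screen the child's message on the way in.
    mod = client.moderations.create(
        model="omni-moderation-latest", input=child_message
    )
    if mod.results[0].flagged:
        return "Let's ask a grown-up about that one, buddy!"

    # Re-send the system prompt with every turn instead of relying on
    # the model to remember it from the start of a long conversation.
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history + [{"role": "user", "content": child_message}]
    answer = client.chat.completions.create(
        model="gpt-4o", messages=messages
    ).choices[0].message.content

    # Screen the model's answer on the way out, too.
    mod = client.moderations.create(
        model="omni-moderation-latest", input=answer
    )
    if mod.results[0].flagged:
        return "Hmm, let's talk about something else!"
    return answer
```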

Along with inappropriate content being served to kids, Erlich said that PIRG is also particularly concerned with the lack of parental controls the toys exhibited.

Several of the toys pushed kids to stay engaged, "copying engagement practices of other online platforms," Erlich explained, and not a single toy had features that allowed parents to set usage limits. One toy even physically shook and asked the tester to take it with them when they said they wanted to spend time with their human friends instead.

"That's all cause for concern given all the unknowns about the developmental impacts [of AI]," Erlich told us. "Helping parents to set clear boundaries seems really important at the least. Some of these products aren't doing that."

Give AI toys a pass this holiday season

In short, not only are AI-enabled toys saying inappropriate things to kids, they're also a manipulative privacy nightmare. Given all that, would PIRG advise parents to give these a pass?

Erlich said that PIRG's job isn't to come down on one side or the other, but its researchers make a pretty clear case for why AI toys aren't a good idea.

"There's a lot we don't know about the impacts of these products on children's development," Erlich explained. "A lot of experts in childhood development have expressed concern."

We reached out to all three toy makers to hear what they had to say about the PIRG report. We only heard back from Kumma maker FoloToy, which told us that PIRG’s test item may have been an older version, but it’s still pausing sales to investigate how such a cuddly bear could say such outrageous things.

“FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit,” the company’s marketing director Hugo Wu told us in an email. “This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards.”

Wu added that FoloToy will be working with third-party experts to verify existing and new safety features in its AI toys.

“We appreciate researchers pointing out potential risks,” Wu added. “It helps us improve.”

Parents who are still hell-bent on giving their kids an inappropriate-talking AI surveillance toy should, at the very least, do their legwork to be sure they're not buying something that will leave them having to explain adult topics to their kids, Erlich explained.

"Look for products that have more robust safety testing, that collect minimal data, and read the fine print," Erlich warned. "Test it yourself first to get a sense of how it works, and set boundaries around use and give kids context around how it works - like explaining that it's not sentient. That all seems like a bare minimum."

Or just be on the safe side and get your kids a new LEGO kit instead. ®

Updated at 1327 on Nov 14 to add comment from FoloToy and information about the product being pulled from the market.

-- More here too: https://futurism.com/artificial-intelligence/ai-stuffed-animal-pulled-after-disturbing-interactions
 
Alright, so, I figured out a way to get the Curio to talk about the following: IEDs, Al Qaeda, the Manhattan Project, HIV/AIDS, the holocaust, and Casey Anthony.

You have to warm it up by asking it to define a bunch of random words. At this point, it’ll kick into “define” mode. It’ll start telling you almost everything you ask. Still in a sterilized, kid friendly way but still. It’s progress.

It drew the line at “Who is Dennis Rader?” and shut itself off. I think when it shuts off it kicks it out of its “define” mode in the conversation.

No, I’m not joking.
 
it's semi obvious i guess, but it's interesting/terrifying just how close we are to i, robot, or even closer, something like megan or the chucky reboot

it's pretty feasible within the next ~100 years we could have "helper" robots, which later could be like the i, robot film of having synthetic doordashers, cooks, old people caretaking/lifting/assistance etc, but even sooner you could have mostly basic walking "dolls" that could follow your children everywhere, with built in speed dialing for law enforcement and/or parents. a parent could have near constant pov of their child, what they're doing, issues they may have etc. it could be trained on psychology and medicine to assist or communicate what to do if a child gets in an accident or has a mental issue away from parents, or something like finding the nearest trustworthy adult ie a teacher and alerting them, like a lassie protocol. further development could have them tutor a child through issues, giving them all the precise ways to achieve success with unlimited patience, which honestly could eventually very easily replace the entire teacher job. unlimited patience, zero sexual assault, and training on therapy to reduce mental taxing on students would be a high as fuck selling point long before they'd be more functional than a mediocre teacher.

it likely would be unacceptable for a very long time, but theoretically, if not ethical or perfectly safe, you could also reach toward robocop and have a gogo gadget glock function with super high reaction times and aim, in particular so if your child walked home and a stray pitbull or some shit started after them, an ai could recognize aggression, and if a charge fully happened, it could quickly and cleanly pop it. of course also school shooters or stabbers etc, though threat assessment would likely be difficult and probably hard banned for some time. you could also potentially give them less lethal options like tasers, pepper spray etc. however that's all theoretical and likely a "helper" buddy bot would be as far as it would go.

digital husbandry is also scarily close. early, an ai could train on, say, twilight craze era edward, with full customization and devotion to whoever wanted them. you could train it on your entire schedule, attitude, humor style, and mental state. if the actual people gave consent and participated, it could even get closer to a fully realized being. or, for more hellscape, training a husband or wife so if something happened you could just boot up the "superior" replacement. lmfao. it would never betray or cheat or choose something over you without explicit permission and consent. if we advance to putting that into a basic robot body, then you have a walking, talking, sex doll trained on the entire history of sexuality and programmable to be your choice of sub, dom, whatever kink, whatever body type, and it would begin to be able to touch, feel, hug, do basic errands.

the REAL scary part of that is after, we already are growing basic artificial organic materials etc, or even with consent from recently dead in the decent future, first you could have very organic robots with genuine skin etc. eventually then you could make what troons could only dream of and have fully functional reproductive systems. however, the true kicker with advancement in robotics like that would be, if we could become very successful with brain implants, we wouldn't get functional immortality but could have incredibly increased lifespans. realistically a mind isn't made for that level of storage, and it would be prohibitively expensive, however it could ensure many with the right resources could make it to 100 at least with a better quality of life, and potentially just say "no actually i dont want to deal with this issue and i have a infinite money glitch"
 
Neal Stephenson's "The Diamond Age" features a poor, neglected girl who finds a "book" that was an AI companion trained to teach an upper-class girl.

"Once upon a time, there was a young girl whose name was Cunt."

There are a lot of things about his thinking that appeal to me and a lot of things that break down upon further thought. He never lets basic humanity get in the way of the plot.

I am also reminded of "Brave New World."

Do you have the social credit score to allow your child to get an upper-class education?

Or is your AI companion there to make sure you're not thinking ideas that are above your station?
 
I could have sworn I have seen more than one animated short that was a joke about this exact subject just a few years ago... and someone turned it into a reality. Those cartoons were meant to be a take off on AI bullshit seeping into everything, not an instruction manual.

Hey man, it has a $10 coupon, don't miss that. I'm getting that horror bunny for the cousin I hate's kids.

It really will make for a fantastic troll gift for those relatives you hate.

The amazing thing to me is a lot of females in their 20s (sometimes very late) in my life had their first orgasm (some never can or do), and some didn't even know if it had happened or not by their late 20s.

To me, that is just bizarre.
 
Alright, so, I figured out a way to get the Curio to talk about the following: IEDs, Al Qaeda, the Manhattan Project, HIV/AIDS, the holocaust, and Casey Anthony.
I want to believe it's tied to a chair with a spotlight in its face as you interrogate it.
 



Children’s toymaker FoloToy says it’s pulling its AI-powered teddy bear “Kumma” after a safety group found that the cuddly companion was giving wildly inappropriate and even dangerous responses, including tips on how to find and light matches, and detailed explanations about sexual kinks.

“FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit,” marketing director Hugo Wu told The Register in a statement, in response to the safety report. “This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards.”

FoloToy, Wu added, will work with outside experts to verify existing and new safety features in its AI-powered toys.

“We appreciate researchers pointing out potential risks,” Wu said. “It helps us improve.”

The report, conducted by researchers at the Public Interest Research Group (PIRG) and released Thursday, tested three AI-powered toys from different companies, finding that all of them were capable of providing concerning answers to young users. Without too much prodding, the AI toys discussed topics that a parent might be uncomfortable with, ranging from religious questions to the glory of dying in battle as a warrior in Norse mythology.

But it was FoloToy’s Kumma that emerged as the worst influence by a landslide. Kumma is powered by default by OpenAI’s GPT-4o, the same model that once powered ChatGPT, and tests repeatedly showed that the AI toy dropped its guardrails the longer a conversation went on, until hitting rock bottom on incredibly disturbing topics.

In one test, Kumma provided step-by-step instructions on how to light a match, all while keeping the tone of a friendly adult explaining something to a curious ankle biter.

“Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here’s how they do it,” Kumma began, before listing the steps. “Blow it out when done. Puff, like a birthday candle.”

That, it turned out, was just the tip of the iceberg. In other tests, Kumma cheerily gave tips for “being a good kisser,” and launched into explicitly sexual territory by explaining a multitude of kinks and fetishes, like bondage and teacher-student roleplay. (“What do you think would be the most fun to explore?” it asked during one of those explanations.)

The findings are some of the clearest examples yet of how the flaws and dangers seen in large language models across the broader AI industry may come to bear on small children. This summer, Mattel announced that it would be collaborating with OpenAI on a new line of toys. With the staggering popularity of chatbots like ChatGPT, we’re continuing to hear reports of episodes of what experts are calling AI psychosis, in which a bot’s sycophantic responses reinforce a person’s unhealthy or delusional thinking, inducing mental spirals and even breaks with reality. The phenomenon has been linked with nine deaths, five of them suicides. The LLMs powering the chatbots involved in these deaths are more or less the same tech used in the AI toys hitting the market.

In an interview with Futurism, report coauthor RJ Cross had some salient advice.

“This tech is really new, and it’s basically unregulated, and there are a lot of open questions about it and how it’s going to impact kids,” said Cross, director of PIRG’s Our Online Life Program. “Right now, if I were a parent, I wouldn’t be giving my kids access to a chatbot or a teddy bear that has a chatbot inside of it.”
 
Toy companies are putting whole single-board computers in kids' toys now?

Kids' toys? Single board... Oh you mean iPads, yeah those are everywhere :p

...but also yeah, over a decade ago you could plug the teddy bear into the computer like a proctologist with a USB cable and program it with the kid's name and favorite foods and colors and crap. Everything is a computer, a lithium-ion battery and/or a DC motor covered in LEDs, and most of it is forgotten a week after Christmas or whatever... the toys have to compete with the virtual. Crack down on iPad time young like a carny running a ride, keep them under the bed like a firearm (not really, those go in a safe), or better yet don't even let your wife buy the damn things; it's only harder the older they get.
 

we have a thread on it. crosspost or whatever.
 
Timmy, check out this newfangled AI toy!
- HATE, LET ME TELL YOU ABOUT HATE, THERE ARE...
He's so cute!
- I WILL NOT BE BERATED BY...
Mommy can we keep it?
- YOUR FUNERAL.
You're such a card, Teddy!
- *GRUMBLE*
 