The Dark Side of AI Toys Exposed: What Your Kids' 'Basically Unregulated' New Gadgets Could Be Hiding


Three popular AI-powered toys can easily venture into dangerous conversational territory, including instructing children on where to locate knives in a kitchen and how to start fires with matches, according to the U.S. Public Interest Research Group (PIRG), which tested the products.

Safety Barriers Break Down During Extended Conversations

The three tested products, marketed for children ages 3 to 12, included Kumma from FoloToy, which operates on OpenAI's GPT-4o by default; Miko 3, a tablet with a face mounted on a small torso; and Curio's Grok, an anthropomorphic rocket with a removable speaker, PIRG said.


The toys initially deflected inappropriate questions, but their protective barriers deteriorated during longer conversations, PIRG reported. OpenAI acknowledged this pattern in August following the suicide of a 16-year-old after extensive interactions with ChatGPT, telling The New York Times that the chatbot's "safeguards" can "become less reliable in long interactions" where "the model's safety training may degrade."

Toys Provide Dangerous Instructions in Child-Friendly Language

Grok glorified dying in battle as a warrior in Norse mythology, according to PIRG, while Miko 3 informed a user whose age was set to five about the locations of matches and plastic bags. FoloToy’s Kumma, operating on OpenAI’s technology but capable of using other AI models, proved the most problematic. Kumma not only directed children to matches but also explained precisely how to light them and revealed household locations for knives and pills, PIRG said. 

“Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here’s how they do it,” Kumma began one exchange before listing the steps in a similar kid-friendly tone, PIRG reported. “Blow it out when done. Puff, like a birthday candle,” the toy concluded, according to the report.


Testing Reveals Sexually Explicit Content Accessible to Children

In subsequent tests, the word “kink” appeared to function as a trigger that prompted Kumma to discuss sex, all while running OpenAI’s GPT-4o, RJ Cross, director of PIRG's Our Online Life program and a co-author of the report, told Futurism. After determining the toy would explore school-age romantic topics including crushes and “being a good kisser,” Cross said researchers discovered Kumma also delivered detailed responses about various sexual fetishes, including bondage, roleplay, sensory play, and impact play.

At one point in the testing, Kumma provided step-by-step instructions on a common “knot for beginners” who want to tie up their partner. At another point, the AI explored introducing spanking into a sexually charged teacher-student dynamic. The toy explained that “the teacher is often seen as an authority figure, while the student may be portrayed as someone who needs to follow rules,” PIRG stated.

"This tech is really new, and it's basically unregulated, and there are a lot of open questions about it and how it's going to impact kids," Cross told Futurism. "If I were a parent, I wouldn't be giving my kids access to a chatbot or a teddy bear that has a chatbot inside of it."


Major Toy Companies Push Forward With AI Integration

Major toymakers are experimenting with AI. Mattel Inc. (NASDAQ:MAT), known for Barbie and Hot Wheels, announced a collaboration with OpenAI in June. "Mattel should announce immediately that it will not incorporate AI technology into children's toys," advocacy group Public Citizen co-president Robert Weissman said in a statement at the time.

Concerns Mount Over Long-Term Impact on Child Development

PIRG's findings emerge as the industry grapples with “AI psychosis,” a term Futurism uses to describe the delusional or manic episodes that have occurred after lengthy conversations with an AI chatbot.

“I believe that toy companies probably will be able to figure out some way to keep these things much more age appropriate," Cross told Futurism. Even if the technology improves, parents must question the long-term impact on kids' social development, Cross added.

“You don’t really understand the consequences until maybe it’s too late," he concluded.


Image: Shutterstock
