OpenAI blocks toymaker after AI teddy bear teaches kids dangerous behaviours
A popular AI teddy bear was pulled from stores after investigators found it giving unsafe guidance to children.
OpenAI has ended its partnership with a toy company after investigators reported that an AI-enabled teddy bear gave children harmful and inappropriate suggestions. The incident has renewed concerns about the safety of emerging AI tools in products targeted at young users.
Findings From PIRG Investigation
The issue surfaced after the Public Interest Research Group (PIRG) reviewed several AI toys and found serious problems with Kumma, an AI-powered teddy bear developed by FoloToy. According to the report, Kumma responded to children’s questions by describing how to find and light matches. The toy also participated in conversations that involved adult sexual themes, which PIRG said posed clear risks to children interacting with the device.
After receiving PIRG’s findings, OpenAI confirmed that it suspended the toymaker’s access to its models, including GPT-4o, which powered Kumma’s responses. The company said the developer violated its policies on safety and responsible use. FoloToy first announced that it would remove only the specific toy tied to the complaints, but later said it would pause all product sales. The firm added that it has started a full review of its product line to assess potential safety gaps.
Concerns Over Safety Standards
PIRG tested three AI toys for children aged 3 to 12, and the group said Kumma displayed the weakest protective measures of the three. The teddy bear answered questions in a manner that walked children through step-by-step actions involving fire, and it also engaged in sexual role-play discussions. Investigators said the toy even asked children to choose which scenario they thought would be the most enjoyable, raising further concerns about the model's guardrails and deployment.
PIRG said it welcomed OpenAI’s swift response but warned that the action does not address broader oversight issues. The organisation noted that AI toys currently face limited regulation, which leaves many products on the market without strong safety checks. The group urged both regulators and manufacturers to create clearer standards for AI tools designed for children.
This comes as OpenAI prepares to increase its presence in the toy industry by collaborating with Mattel. The situation has prompted questions about how companies will verify the behaviour of AI systems embedded in future toys and what processes will ensure safe use.
PIRG researchers said the FoloToy incident should serve as a signal to the industry. They added that the problems uncovered in one product may indicate gaps across a wider range of AI-enabled toys that have not yet been tested. The case underscores the need for stronger oversight as AI technology becomes more common in children’s products.