Commentary

Mattel Puts Brakes On OpenAI Collaboration

Mattel's collaboration with OpenAI will not materialize this year, as regulatory challenges emerge and scrutiny increases over how children and teens interact with artificial intelligence (AI) technology.

By integrating OpenAI's large language models and generative AI capabilities into toys, Mattel intends to change how it designs, markets, and optimizes the performance of its most iconic brands, such as Barbie and Hot Wheels.

In several cases, core legal arguments contend that AI systems act not as neutral tools but are engineered to encourage emotional relationships with the technology — behavior that some experts say can lead to isolation and a decline in mental health.

Mattel confirmed the delay of product launches to Axios, which reported today that the company will not carry through with its original target of announcing a product in 2025.


It is challenging for a well-known brand like Mattel to run ads and a marketing campaign while hundreds of lawsuits involving AI technology, self-harm, and suicide are in progress.

"We don't have anything planned for the holiday season," Axios reported, citing a Mattel representative.

When a product does launch, Mattel will gear it toward "older customers and families," probably because OpenAI's developer interface supports only users 13 years of age and older.

Originally, Mattel told Bloomberg that using the “incredible technology is going to allow us to really reimagine the future of play.”

Then advocacy groups stepped in.

“Mattel should announce immediately that it will not incorporate AI technology into children’s toys,” advocacy group Public Citizen co-president Robert Weissman wrote in a statement in June at the time of the partnership announcement. “Children do not have the cognitive capacity to distinguish fully between reality and play.”

Since then, a series of developments has prompted executives at Mattel to take a different approach.

In late 2025, OpenAI began enforcing the minimum age, 13 and older, for those who can use its developer interface and platform. The change follows several high-profile lawsuits that call out AI as being fundamentally dangerous for minors.

The lawsuits have put an increased focus on interactions between AI and vulnerable audiences, including youth, amid reports of chatbots helping fuel delusions and suicidal thoughts.

Influencing children in an unhealthy way is my main concern. China is one country that is spurring the development of AI in toys.

Shenzhen Haivivi Technology, for example, produces BubblePal, a ping-pong-ball-sized device that clips onto existing plush toys to make them interactive and runs on DeepSeek's large language models (LLMs). Since its mid-2024 launch, it has sold over 200,000 units globally.

The company also produces the CocoMate line, which features emotion recognition and allows parents to monitor transcripts of their child's AI interactions via a smartphone app.

Data collection, retention, and tracking have been concerns. In the United States, several privacy challenges have emerged. The U.S. Federal Trade Commission (FTC) and the Department of Justice took legal action against Apitor Technology, a Chinese robot toy maker, in September 2025.

The complaint, reported in November 2025, alleged Apitor "surreptitiously" collected precise geolocation data from children under 13 via its app and allowed a Chinese third-party analytics provider, JPush, to access this data for advertising and other purposes without parental consent.
