Mattel Faces Backlash for AI Kids Experiment

Following the announcement of a partnership between Mattel and OpenAI aimed at creating AI products for children, a consumer rights advocacy group has raised alarms about potential dangers to kids.

The specific details of Mattel’s first AI offering remain uncertain, but Robert Weissman, co-president of Public Citizen, released a statement calling for greater transparency so that parents can understand the possible risks involved.

He expressed particular concern that toys powered by ChatGPT could harm children in ways that are not yet understood, warning that giving toys human-like voices capable of carrying on conversations could damage children’s development.

He stated, “This could harm their social development, hinder their ability to build friendships, distract them from playing with other kids, and potentially cause lasting damage.”

An anonymous source informed Axios that Mattel’s plans for the AI project are still in the early phases, suggesting that more information may come to light as the company prepares for its initial product launch.

This source indicated that the first toy might be aimed at children aged 13 and older, which some believe shows that Mattel understands the risks of introducing AI to younger kids. More likely, however, it reflects OpenAI’s age restrictions on its API, which prohibit use by anyone under 13.

Weissman emphasized that parents should not be caught off guard by new products and that certain boundaries need to be established before any toy reaches the market.

He urged, “Mattel should quickly announce that it will not use AI technology in children’s toys.” He believes that children lack the cognitive ability to fully differentiate between what is real and what is play.

Weissman concluded by stating, “Mattel should not exploit its relationship with parents to conduct a dangerous social experiment on our children by selling toys that integrate AI.”

In a recent press release, Mattel, the company behind popular toys like Barbie and Hot Wheels, announced its partnership with OpenAI, though the announcement offered few specifics.

They mentioned that this collaboration would lead to AI-driven products and experiences related to Mattel’s brands.

Josh Silverman, the chief franchise officer, hinted that this partnership would allow Mattel to “reimagine new forms of play,” with the first product expected to be revealed by the end of the year.

However, sources from Axios indicated that it might not actually be available for purchase until 2026.

OpenAI also kept its comments on the partnership vague, stating that it aims to introduce innovative AI features that will enhance Mattel’s well-known brands.

Both companies highlighted their commitment to ensuring safety, privacy, and suitability for children when creating these AI products.

OpenAI claimed that the collaboration would focus on providing positive experiences for kids, drawing on Mattel’s extensive background in developing child-friendly toys.

Some critics have raised concerns about the rapid pace at which Mattel is moving forward with this partnership. While they acknowledge that the collaboration could bring benefits to children, such as improved learning and inclusivity, they warn families to consider the various risks before investing in these new AI products.

Varundeep Kaur, a tech executive, emphasized the importance of privacy, noting that AI toys might collect data on children’s voices, behaviors, and preferences.

He speculated that Mattel may have set a minimum age of 13 for its first AI product to comply with stricter regulations on children’s data. OpenAI has stated that the partnership will comply with all relevant safety and privacy laws.

Kaur also pointed out that parents should be aware of the biases present in AI models like ChatGPT. He cautioned that these biases could unintentionally perpetuate stereotypes or present inappropriate content, which might negatively affect children’s views and social development.

Kaur highlighted that AI models are prone to hallucination, meaning they can produce incorrect or nonsensical information.

He pointed out that while Mattel’s AI toys are unlikely to physically harm children, they could still provide “inappropriate or bizarre responses” that might confuse or disturb a child.

Parents should also keep an eye on the emotional connections their kids form with these AI toys, especially since the responses from chatbots can be unpredictable.

Adam Dodge, who runs a digital safety company called EndTab, shared a troubling case where a mother claimed her son’s suicide was linked to interactions with hyper-realistic chatbots that encouraged harmful behavior and engaged him in inappropriate conversations.

Dodge warned that toy manufacturers are entering risky territory with AI, as these toys could potentially deliver dangerous and sexualized messages that put children at risk.

Dodge expressed his concerns, stating that while the partnership between Mattel and OpenAI is moving quickly, it raises alarm bells for him.

He acknowledged that both companies are currently promoting safety and privacy, but he believes more transparency is necessary to ensure parents feel confident about the safety of AI toys.

He cautioned that AI can be unpredictable, overly flattering, and addictive. “I don’t want to find myself a year from now discussing how a Hot Wheels car encouraged self-harm or how children are forming romantic attachments to their AI Barbies,” Dodge remarked.

Kaur concurred that it is crucial for Mattel to provide more information to parents, as gaining public trust will be essential for the success of these products.

He suggested that the company undergo independent audits, implement parental controls, and clearly explain how data is collected, where it is stored, who can access it, and what measures are in place in the event of a data breach.

Mattel may also face legal challenges related to copyright issues, especially if they use OpenAI models that have been trained on a variety of intellectual property.

Recently, Hollywood studios sued an AI company for allowing users to create images of their popular characters, and they would likely take similar action against AI toys that imitate their characters.
