Commentary

Who Will Write The AI Playbook For Kids? Kate O'Loughlin Has Some Awesome Suggestions

Within the wildly expensive, energy-devouring AI race, history is repeating itself – and not in a good way.

As with every major digital technology since the dawn of the internet, AI companies have overlooked the safety and wellbeing of kids and teens, either by writing them out of the script entirely or by downplaying their need for an age-appropriate environment.

According to Kate O’Loughlin, CEO of youth branding ecosystem SuperAwesome, the AI playbook for kids has yet to be written. Given the relationship-building chatbot models now being made available to younger consumers – such as xAI’s crass, oversexualized NSFW models – the consequences for child users’ personal growth and wellbeing could be unprecedented.

With 72% of U.S. teens having already experimented with AI companions, studies show these chatbots can perpetuate loneliness, foster over-reliance on the technology, put users’ personal data at risk of misuse, erode human relationships and more.

Market research shows that AI companions aren’t going anywhere. Not only have they entered the majority of public schools in the U.S. due to the launch of Gemini on Google Chromebooks, but AI companionship could scale five-fold by 2030, hitting $150 billion in annual global revenue, according to a recent report by ARK Invest.

As AI companions become assets for brands across various digital environments, the lack of safety guardrails for kids and teens may also endanger companies’ consumer relationships.

MediaPost spoke with O’Loughlin about the unethical patterns big tech continues to repeat in its AI rollouts, as well as the ideal state of kid-friendly digital environments, the future of branding via AI companions, positive use cases for this emerging technology and more.

This interview was edited for concision and clarity.

The last time we spoke, you described the concerning launch of Google’s Gemini chatbot on school Chromebooks.

That happened toward the end of the school year. From what I understand, school districts can elect to turn off Gemini in Google Classroom, but even with Trump encouraging schools to incorporate chatbot technology into learning, it’s unclear how many districts have a plan.

And while parents and educators think about the homework-helping element of Gemini, I don’t think people fully appreciate the AI companion element.

Has Google introduced any guardrails for kids’ use of Gemini?

Based on their own disclosures, Google has decided to put the responsibility for kids’ safety back on parents and educators – which involves reminding young people that Gemini isn’t human.

Do you think parents understand this fact themselves?

No, I don’t think most parents are in a position to understand that. Google also expects them to show children under 13 how to double-check chatbot responses, given the likelihood of misinformation, while reminding them not to share personally identifiable information.

Why hasn’t Google enforced better safety precautions?

Google doesn’t want its search business displaced by AI companies like ChatGPT or Perplexity, so it has leapt over some very fundamental safeguards that you would expect a big company with access to so many children to implement. In my opinion, it’s a disappointment.

So as a CEO in the tech world, you expected Google to regulate the experience for young people?

I’m a glass-half-full kind of person; I thought they would have learned from prior rollouts. We’re seeing the same pattern that has played out since the internet was formed.

What pattern is that?

Tech companies’ consideration of kids and teens typically happens in three phases. Phase One, which involves ignoring young people by not building for them, is characterized at best by super basic age gates.

Meaning what exactly?

Right now, confessing that you’re under 13 will exclude you from ChatGPT immediately. But users can simply refresh the page and enter an older age. It’s that easy. So the user experience doesn’t end up being very inclusive; it just allows kids to lie their way into an environment that has no consideration for them.

What’s Phase Two?

This is after a company gets in trouble, leading to a hard-line bifurcation. For example, YouTube created YouTube Kids: kids are supposed to go over here, and adults are supposed to go over there. It’s not unlike what we’re seeing now with Grok and Baby Grok.

Why is that a problem for young people?

What happens is that the kid experience becomes very “kiddy,” to the point where kids will age out of it by age 5. That’s also true of Spotify Kids, which doesn’t even allow Taylor Swift.

You can imagine Baby Grok will be for preschoolers and not interesting for older kids, which incentivizes kids to lie and use the mainstream product instead.

And Phase Three?

Phase Three is the more desirable state, which I think of as “The Disney World Model,” where everyone can go and enjoy various age-appropriate experiences. At Disney, some kids will be too short for some rides, and you can’t drink beer if you’re under 21, but the environment is still engaging for everyone and it grows with the consumer.

So which phase are we currently in regarding AI companions?

Phase One. There are no considerations for kids and teens. And the major problem is that the AI models being funded now aren’t being built to consider what is age appropriate.

The data used to train AI models should include how different ages naturally understand the world: truth versus misinformation, real opinions versus paid opinions – stuff like that.

Does any platform currently reflect Phase Three and The Disney World Model you mentioned? Roblox, perhaps?

Roblox isn’t quite there yet. There are some inclusive gaming experiences and settings, but if you develop a game on Roblox, you can only monetize players who are 13 and older with ads; there’s no ad product for all players, which still represents a hard bifurcation.

What was your first reaction to xAI’s NSFW Grok companions?

I thought about the companion AI market skyrocketing; some of the data shows it hitting between $70 billion and $150 billion within the next five years.

It’s going to be a massive industry, and I understand where the growth will come from – unfortunately, it relates back to the loneliest generations and will likely increase the time consumers spend alone.

How so?

The technology aims to replace consumers’ natural need for companionship. AI companions are always there and affirm whatever you want to hear and do. By design, there’s no friction.

What kind of future regulation would you like to see in the AI space?

Well, in line with the “Study Mode” being rolled out by ChatGPT, companion chatbots could be trained to challenge users to think more critically and to prepare for real-world uses, like running practice interviews for a job.

But the playbook will certainly have to be written, especially around how to ethically build a relationship with a teen or kid. You need to think about disclosure, safety, and not being exploitative.

How does companion AI technology affect your role at SuperAwesome regarding the future of kid-friendly branding?

So far, it has been interesting for ad research and insights, but we’ve also been working with a non-playable character inside of Roblox that can ask players their opinions and learn from their answers.

You can imagine that these AI companion characters in a game like Roblox or Fortnite could be branded – kids’ squads going into a battle royale with Darth Vader, who is trained by a model to build a relationship with those consumers.

But for now, we’re just trying to get our heads around all the considerations that would have to go into providing the kid-centric versions.
