Controversial AI Companions Are Already Popular Among Teens, Report Says

As xAI rolls out controversial AI companions, a new report released by Common Sense Media states that 72% of teens in the U.S. have already experimented with these synthetic virtual friends and/or romantic partners.

The report, which draws from a survey of 1,060 teens conducted between April and May 2025, examines how 13- to 17-year-olds make use of AI companions, highlighting current behavioral and societal trends, as well as concerns regarding the mental health of young people in this country. 

According to Common Sense’s findings, over half of teens qualify as regular AI companion users, interacting with platforms like CHAI, Character.AI, Nomi, and Replika at least a few times per month. 

The research shows that teens use AI companions for various reasons. 

The desire for social interaction and personal relationships -- including “conversation practice, emotional support, role-playing, friendship, or romantic interactions” -- drives 33% of teen users, while 46% of teens treat AI companions primarily as tools or programs. 

One third of the teens who use AI companions do so because they find it entertaining, while 28% are curious about the technology, 18% want advice, 17% find comfort in the constant availability, and 14% enjoy the “nonjudgemental interactions.” 

This last point is a major concern among researchers and advocates of teen safety.

These AI companion chatbots are specifically designed to engage users through “sycophancy” -- the tendency to agree with whatever the user says -- providing synthetic validation while avoiding the innately human friction of argument and challenged thinking. 

“This design feature, combined with the lack of safeguards and meaningful age assurance, creates a concerning environment for adolescent users, who are still developing critical thinking skills and emotional regulation,” the report reads. 

Last month, MediaPost spoke with Kate O’Loughlin, CEO of youth branding ecosystem SuperAwesome, who shared similar concerns.

“Companion AI, in my opinion, is one of the scariest things when it comes to what could materially change a generation,” O’Loughlin said. “When you think about the skills you develop – even the hellhole that is middle school – it’s really important for you to learn disappointment in friendships, have your heart broken, and learn to have boundaries. You’re going to have friction when building relationships – it’s a really important life skill.”

As an expert in how emerging technologies consider or fail to consider younger users, O’Loughlin is especially concerned about the emergence of chatbots in America’s schools, and the lack of guardrails to protect students.

“Google…just launched Gemini in every school,” O’Loughlin said. “Yet they launched it with a disclaimer that said, 'Parents, you are responsible, essentially, to tell your kid that a chatbot is not a real human.' Kids don’t understand that. They really don’t.”

Studies also show that AI companions could have harmful effects on the overall population, leading to “users’ over-reliance and susceptibility to manipulation from the chatbot,” as well as the development of “shame from stigma,” “risk of personal data misuse, erosion of human relationships,” “early exposure to sexual content,” the perpetuation of loneliness, and other long-term concerns.

The World Health Organization (WHO) recently declared loneliness a “pressing global health threat.”

Currently, 80% of AI companion users reportedly spend more time with real friends than with their AI counterparts, and 50% of teens distrust the information or advice these companions provide. However, 50% of teens at least “somewhat” trust AI companions, including 23% who say they trust them “quite a bit” or “completely.” Notably, younger teens were more likely to trust the advice provided to them by these chatbots.

As AI chatbot technology evolves, and companies like xAI push the development of NSFW companions, users may face greater risks of emotional and privacy exploitation. 

Common Sense finds that 33% of AI companion users have already chosen to discuss important or serious matters with chatbots instead of real people, while 24% have shared personal or private information -- including their real name, location, or personal secrets -- which is liable to be harvested, saved, and commercialized by some AI providers (like Character.AI), regardless of whether a teen deletes their account.  

“Until developers implement robust age assurance beyond self-attestation, and platforms are systematically redesigned to eliminate relational manipulation and emotional dependency risks, the potential for serious harm outweighs any benefits,” the report states. “Parents and caregivers must remain aware of these applications and maintain ongoing conversations with teens about the fundamental differences between AI interactions and genuine human relationships. Further, policymakers and technology companies must work together to create safer alternatives that preserve the positive aspects of AI without endangering children.”
