CARU Issues Guidance For AI Aimed At Kids

Companies that use virtual influencers to advertise to children should ensure that the ads carry "clear and meaningful disclosures" informing children that they are not communicating with a real person, the BBB National Programs’ Children’s Advertising Review Unit (CARU) recommends in a report issued Monday.

The self-regulatory organization also suggests that companies using algorithms to advertise to children should provide "regular reminders" that they are interacting with "nonhuman, AI-based technology," and offer "clear opt-in and opt-out consent mechanisms."

The group adds that one "best practice" regarding privacy would be to prohibit the collection, use or disclosure of children's data by an AI model, including for training purposes.


Those are among the recommendations in the new 15-page report "Generative AI & Kids: A Risk Matrix for Brands & Policymakers."

"While generative AI has many benefits, it also poses a myriad of risks, especially for children, who are vulnerable to advertising messages and practices due to their limited knowledge, experience, sophistication, and maturity," the report states.

The new paper comes almost 18 months after the group warned that it plans to strictly enforce its guidelines on the use of AI, citing the potential for manipulative practices, including deceptive influencer marketing, misleading claims, and problematic privacy practices.

The report grew out of a working group made up of around 15 representatives from networking and streaming companies, food and beverage businesses, toy and gaming companies, and ad-tech firms.

The working group's aim was to make sure that AI ads -- like other types of advertising -- are not deceptive or misleading to children, according to CARU director Rukiya Bonner.

The report lists recommendations for addressing a host of risks that AI ads could pose to children -- including that such ads could manipulate children, mislead or deceive them, and violate their privacy.

One recommendation aimed at combating the risk of deception is for companies to "ensure that the ad does not mislead or blur the distinction between what is real and what is imaginary."

While companies have long had the ability to use unrealistic images in ads -- such as by retouching photos or creating fanciful videos -- Bonner contends that ads incorporating deep fakes "have the potential to mislead in ways that fantasy sequences or animation or even CGI hasn't in the past."

She adds that deep fakes, when combined with data about children, could be especially problematic. For instance, a company theoretically could use generative AI to create a voice clone of a child's favorite celebrity, and then incorporate that clone into ads targeted at the child.

Likewise, if it's known that a child has certain physical characteristics -- red hair and freckles, for instance -- a company potentially could create a virtual influencer with those same characteristics in an attempt to create a personal relationship with the child.

While those two examples appear theoretically possible, the CARU working group has not yet come across them, Bonner says.
