Commentary

Warning: CARU Will Strictly Enforce New Guidelines Covering AI-Generated Kids Ads

The rapidly evolving field of AI-generated advertising is, not surprisingly, raising new ethical questions and business-practice issues for advertisers and agencies. What is surprising is that no standard had been codified to protect impressionable children, who may lack the faculties to distinguish synthetic, machine-generated content, or to judge how truthful it is.

Until now.

This morning, the Children's Advertising Review Unit (CARU) of BBB National Programs issued a new compliance warning covering the use of AI under its advertising and privacy guidelines.

"The CARU compliance warning puts advertisers, brands, endorsers, developers, toy manufacturers, and others on notice that CARU’s Advertising and Privacy Guidelines apply to the use of AI in advertising and the collection of personal data from children," the unit said in a statement released this morning, along with detailed updated guidelines, which can be downloaded and read here.


The warning states that CARU will strictly enforce its guidelines in connection with the use of AI, given the risks it may pose in terms of manipulative practices, including influencer marketing, deceptive claims, and privacy practices.

The warning emphasizes that marketers should be particularly cautious to avoid deceiving children about what is real and what is not when engaging with realistic AI-powered experiences and content.

Specifically, it states that brands using AI in advertising should be particularly cautious of the potential to mislead or deceive a child in the following areas:

  • AI-generated deep fakes; simulated elements, including the simulation of realistic people, places, or things; or AI-powered voice cloning techniques within an ad.
  • Product depictions, including copy, sound, and visual presentations generated or enhanced using AI, that indicate product or performance characteristics.
  • Fantasy, via techniques such as animation and AI-generated imagery, that could unduly exploit a child’s imagination, create unattainable performance expectations, or exploit a child’s difficulty in distinguishing between the real and the fanciful.
  • The creation of character avatars and simulated influencers that directly engage with children and can mislead them into believing they are engaging with a real person.