Analyst Advocates NARB Self-Reg Model For Generative AI Advertising

Self-regulation seems to be a massive and highly complex endeavor. But one Gartner analyst believes the advertising industry can accomplish the task for generative artificial intelligence (GAI).

“One of the most notable achievements of the advertising industry’s self-regulation was the creation of the National Advertising Review Board in 1971,” Gartner Analyst Andrew Frank wrote in an opinion piece, with a copy sent to Media Daily News.

The NARB is a voluntary body that reviews and adjudicates cases of deceptive or unfair advertising practices brought by consumers, competitors, and even NARB's own staff. Decisions made by NARB members are binding, published, and monitored for compliance.

At a glance, the group’s structure may appear similar to that of the Interactive Advertising Bureau (IAB). The board consists of representatives from advertisers, agencies, media, and the public, and it operates under the principles of transparency, accountability, and due process.


With the rapid growth of GAI, Frank says, the advertising industry faces a challenge similar to the one it faced decades ago. The technology has the potential to provide immense value and benefit, but it also poses significant risks and ethical dilemmas.

Foundation large language models (LLMs) create the building blocks of GAI applications and are offered as a service by cloud providers. Frank says these models are not transparent, reliable, or accountable, and can generate harmful or misleading content that affects individuals, groups, and society.

Frank believes the industry requires a self-regulatory body that reviews and approves the safety of these models before they are made commercially available. The body would establish and enforce standards and best practices for the testing and auditing of foundation models, as well as for the disclosure and mitigation of potential harms. And, similar to other bodies, it would provide a way to file complaints and resolve disputes.

Does self-regulation work? “No system is perfect,” but Frank believes these bodies are “clear examples of self-regulatory success, especially in partnership with the FTC.”

Thousands of cases have been adjudicated over the years, and the vast majority are resolved without escalation, he explained.

“I think it’s fair to say that however you feel about the current state of truth in non-political advertising, it would be far worse without self-regulation,” Frank said.

When asked whether he believes the advertising industry can be trusted to self-regulate, Frank paraphrased Winston Churchill’s quip about democracy being the worst form of government, except for all the others.

As Frank sees it, one alternative to self-regulation is the status quo, in which “giant AI labs police themselves with minimal transparency or oversight, saying in essence 'trust us' as they pursue GAI as rapidly as possible.”

Then there is the “'pause' scenario called for by the Future of Life Institute last year, which has been ignored and pretty much forgotten” -- and the idea of “rapidly scaled up government oversight, as envisioned by the EU AI Act, which lacks clarity, resources and global authority.”

Frank also outlined what this regulatory body would look like. His vision of a GAI self-regulatory body would include representation from the community of large AI labs, which can be identified by the scale of AI infrastructure they control.

“I envision the body would be comprised of representatives from government, industry leaders, experts and scholars from academia and law, and representatives from the public and, possibly, the media,” he said. “The aim would be to assure neutrality, transparency, expertise and lack of collusion. I believe there’s enough material in non-profit governance to draw on for details.”

The body would enforce standards, but remain voluntary and not legally binding, although “failure to comply would likely trigger a referral to an appropriate government enforcement agency,” he said. “In addition to regulatory incentives, as with approved safety organizations like Underwriters Laboratories under OSHA, it would be almost impossible to market a GAI product without the organization’s stamp of approval. No insurance company would underwrite it and no public enterprise would buy it. Finally, the governed participants would have strong incentives to cooperate to minimize liability and gain public acceptance.”
