
Last year, Meta began testing new safety controls in the form
of filters designed to prevent ads from appearing in risky Feed placements. The company wanted to give advertisers a better idea of what content would surround their ads on Facebook and Instagram.
Now, Meta is rolling out these tools to advertisers in English-speaking and Spanish-speaking markets, along with AI-powered third-party verification via brand-suitability marketing firm Zefr,
which reports independently to brands on how well the filters perform.
Artificial intelligence forms the backbone of Meta's review system, which "learns to classify content in
Facebook and Instagram Feeds" while adhering to industry standards set by the Global Alliance for Responsible Media (GARM) and its "Suitability Framework."
According to the tech
giant, the system works with text, video and images, effectively determining if the content meets Meta’s monetization policies. If not, the system declares the content ineligible to appear above
or below ads. When the content is deemed eligible, “the models assign it to a suitability category.”
Advertisers can now choose from three settings to
control the type of monetizable content that appears above and below an ad: "expanded inventory" (the default setting), "moderate inventory" (for advertisers wanting
to exclude high-risk content), and "limited inventory" (for advertisers wanting to exclude both medium- and high-risk content).
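To make the three settings concrete, here is a minimal sketch of how such a tiered filter could work. The setting names and risk tiers come from the article's description; everything else (the function, identifiers, and mapping structure) is a hypothetical illustration, not Meta's actual system or API.

```python
# Illustrative sketch only: setting names and risk tiers are from the
# article; the mapping logic and all identifiers are hypothetical.

# Risk tiers a suitability model might assign to a piece of content.
RISK_TIERS = ("low", "medium", "high")

# Which risk tiers each control setting excludes from appearing
# adjacent to an ad, per the article's description.
EXCLUDED_TIERS = {
    "expanded_inventory": set(),             # default: no suitability exclusions
    "moderate_inventory": {"high"},          # exclude high-risk content
    "limited_inventory": {"medium", "high"}, # exclude medium- and high-risk content
}

def content_allowed(risk_tier: str, setting: str = "expanded_inventory") -> bool:
    """Return True if content of the given risk tier may appear
    above or below an ad under the chosen setting."""
    if risk_tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {risk_tier}")
    return risk_tier not in EXCLUDED_TIERS[setting]

print(content_allowed("high", "expanded_inventory"))   # True
print(content_allowed("high", "moderate_inventory"))   # False
print(content_allowed("medium", "limited_inventory"))  # False
```

The key design point the article implies is that stricter settings are supersets of looser ones: each tier up only adds exclusions, so an advertiser trades reach for suitability.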
Meta's Vice President of Client Council and Industry
Trade Relations, Samantha Stetson, told AdAge that the company has already tested the program with over 25 advertisers and plans to bring on more third-party verification partners, including
DoubleVerify and Integral Ad Science.
The company also plans to develop the same filters for Reels and Stories.
Starting Thursday, Meta's brand safety controls will begin appearing as
an option in its ads manager platform, though it may take time for all advertisers to see and use them.