As AI regulatory efforts gather momentum in the U.S. and elsewhere, tech companies stand to benefit from helping shape guidelines that leave room for innovation while still providing guardrails.
Against that backdrop, Meta and Microsoft yesterday announced that they have joined a group working on a framework of recommendations for guiding the responsible creation, sharing and distribution of AI-generated, aka “synthetic,” media.
The framework, “Responsible Practices for Synthetic Media: A Framework for Collective Action,” was launched in February by Partnership on AI (PAI), a four-year-old nonprofit with more than 100 partners from industry, media, academia and civil society organizations around the world.
PAI worked for a year with more than 50 organizations, including AI startups, content and social media platforms, and news and human rights organizations, to refine the framework.
Participation by Meta and Microsoft represents a milestone for the framework, said Claire Leibowicz, head of AI and media integrity at PAI.
“Meta and Microsoft reach billions of people daily with creative content that is rapidly evolving,” she said in announcing the news. “These companies have both the expertise and the access needed to reach users all around the world and help them learn to discern AI-generated images, video, and other media as synthetic media’s prevalence grows. Their support of the Framework underscores a continued interest in designing interventions to minimize misinformation, ensure that users are informed about the content they’re seeing, and allow creative expression to flourish.”