Anthropic, Google, Microsoft, OpenAI Launch Forum To Promote Safe, Responsible AI Development

Anthropic, Google, Microsoft, and OpenAI have formed a new industry body to promote the safe and responsible development of artificial intelligence (AI) systems.

The goal of the group -- the Frontier Model Forum -- is to advance AI safety research, identify best practices and standards, and facilitate information sharing among policymakers and industry.

“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation,” Kent Walker, president of global affairs at Google and Alphabet, wrote in a post. “We’re all going to need to work together to make sure AI benefits everyone.”

In early June, OpenAI published research on frontier AI regulation.

The company defined frontier AI models as advanced AI and machine-learning models dangerous enough to pose “severe risks to public safety” -- large language models (LLMs) that surpass the capabilities of existing models.

To address these challenges, the research argues, three building blocks are needed for the regulation of frontier models.

These building blocks include standard-setting processes to identify appropriate requirements for frontier AI developers, registration and reporting requirements to give regulators visibility into frontier AI development processes, and mechanisms to ensure compliance with safety standards for the development and deployment of frontier AI models.

Membership in the Forum is open to organizations that develop and deploy frontier models, demonstrate a strong commitment to safety through technical and institutional approaches, and contribute to advancing the Forum’s efforts, such as by participating in joint initiatives and supporting its development.

Further work is needed on safety standards and evaluations to ensure that frontier AI models are developed and deployed responsibly.

The Forum will serve as one venue for industry discussion and action on AI safety and responsibility.

Initially, the Forum will focus on three areas over the coming year: identifying best practices, advancing AI safety research, and facilitating information-sharing among companies and governments.

In the coming months, the Frontier Model Forum will set up an advisory board.

The founding companies will establish key institutional arrangements, including a charter, governance, and funding, and will form a working group and executive board to lead these efforts.

 
