Microsoft on Thursday joined a growing chorus of voices calling for regulation of the most powerful types of artificial intelligence, such as the large language model GPT-4.
This type of “highly capable” artificial intelligence is both impressive and somewhat unpredictable, the company writes in its new paper, “Governing AI: A Blueprint for the Future.”
“We need regulatory frameworks that anticipate and get ahead of the risks,” Microsoft writes. “And we need to acknowledge the simple truth that not all actors are well-intentioned or well-equipped to address the challenges that highly capable models present.”
The company specifically endorsed OpenAI CEO Sam Altman's recent recommendation that a new agency oversee and license the most advanced types of artificial intelligence. (Microsoft is an investor in OpenAI, and has incorporated GPT-4 into the Bing search engine.)
“Despite rigorous prerelease testing and engineering, we’ve sometimes only learned about the outer bounds of model capabilities through controlled releases with users,” Microsoft writes. “And the work needed to harness the power of these models and align them to the law and societal values is complex and evolving.”
The company's report comes as lawmakers and other officials are increasingly eyeing artificial intelligence.
Federal Trade Commission chair Lina Khan recently said the technology could give a boost to fraudsters, while also facilitating discrimination and privacy violations.
Other watchdogs have long sounded an alarm about bias in algorithms, such as those used in artificial intelligence systems. Last year, the Equal Employment Opportunity Commission noted that chatbots could facilitate discrimination against people with disabilities. For instance, the commission wrote, a chatbot algorithm might screen out all candidates who disclose gaps in their resumes, even if the gap was caused by a disability.
OpenAI itself warned last year that artificial intelligence could potentially fuel disinformation, and that ChatGPT “will sometimes respond to harmful instructions or exhibit biased behavior.”
Not everyone thinks a new agency is the best way to handle the potential pitfalls of artificial intelligence.
The digital rights group Electronic Frontier Foundation argues against creating a new government commission, writing that a licensing agency would likely favor large or well-financed companies.
“Forcing developers to get permission from regulators is likely to lead to stagnation and capture,” legislative counsel Ernesto Falcon writes. “An army of lobbyists and access to legislators through campaign contributions and revolving doors will ensure that such an agency will favor only the most well-connected corporations with licenses.”
The organization says lawmakers should focus on how the technology is used, instead of the technology itself.
“If policymakers are worried about privacy, they should pass a strong privacy law,” Falcon writes. “If they are worried about law enforcement abuse of face recognition, they should restrict that use.”