Advertisers and agencies rely on OpenAI’s ChatGPT and SearchGPT to support ad campaigns and product development, but in a blog post Thursday the company laid out its role in national security after the Biden administration released a national security memorandum on AI for federal agencies.
The shift marks a new development in the company’s approach as it expands from supporting consumers, brands and advertising agencies to serving government entities.
"At OpenAI, we’re building AI to
benefit the most people possible," the blog post explains. "Supporting U.S. and allied efforts to advance AI in a way that upholds democratic values is essential to our mission of ensuring AI’s
benefits are widely shared. We view the NSM as an important step forward in that effort."
The White House framework opens the potential to support more national security work in the U.S. and allied countries, OpenAI said in the post.
As an example, the company explained how it applies its technology to advance scientific research, enhance logistics, streamline translation and summarization tasks, and study and mitigate civilian harm.
There is a contradiction in the events unfolding at OpenAI: even as the company pledges a role in national security, its AGI Readiness lead is leaving and members of the team are being distributed to other groups.
Miles Brundage, head of OpenAI’s artificial general intelligence (AGI) readiness team, is leaving the company, he wrote in a Substack post, and plans to start or join a nonprofit focused on AI policy research. His team moves under OpenAI’s new chief economist, Ronnie Chatterji, Brundage said, and other members are being reassigned.
Earlier this month, OpenAI said it had disrupted more than 20 operations and deceptive networks around the world that attempted to use its platform for malicious purposes since the start of the year.
These activities include debugging malware, writing articles for websites, generating biographies for social media accounts, and creating AI-generated profile pictures for fake accounts on X.
"Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," the company said.
The activity included social-media content related to elections in the U.S., Rwanda, India, and the European Union, as well as efforts by STOIC, an Israeli commercial company that generated social-media comments about elections in India.
OpenAI executives believe the company has a role in national security, with guardrails and policies that govern how its AI can be used.
There is also an opportunity to further its collaboration with the U.S. National Laboratories, building on a bioscience research partnership with Los Alamos National Laboratory.
Currently, OpenAI's policies prohibit anyone from using its technology to harm people, destroy property, or develop weapons.
During the past several months, OpenAI developed a framework for assessing potential national security partnerships, including a set of values to guide this work.
Each potential use is evaluated for alignment with those policies and values through a formal process led by the company’s Product Policy and National Security teams.