The White House framework opens the door for the company to support more national security work in the U.S. and allied countries, OpenAI said in the post.
As an example, the company described how its technology is applied to advance scientific research, enhance logistics, streamline translation and summarization tasks, and study and mitigate civilian harm.
There is a tension in the events unfolding at OpenAI. The company's AGI Readiness lead is leaving and team members are being distributed to other teams, even as the company pledges a role in national security, a new development in its approach as it moves from supporting consumers, brands and ad agencies to serving government entities.
Miles Brundage, head of OpenAI’s artificial general intelligence (AGI) readiness team, is
leaving the company, he wrote in a Substack post. He plans to start a non-profit or join one focused on
researching AI policy. His team moves under OpenAI’s new chief economist, Ronnie Chatterji, Brundage said. Other members are being reassigned.
Earlier this month, OpenAI said it
disrupted more than 20 operations and deceptive networks across the world that attempted to use its platform for malicious purposes since the start of the year.
These activities include
debugging malware, writing articles for websites, generating biographies for social media accounts, and creating AI-generated profile pictures for fake accounts on X.
"Threat actors continue
to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," the
company said.
The activity included social-media content related to elections in the U.S., Rwanda, India and the European Union, as well as efforts by an Israeli commercial company named STOIC that generated social-media comments about elections in India.
OpenAI executives believe the company has a role to play in national security, with guardrails and policies governing how its AI can be used.
The company already collaborates with DARPA to help cyber defenders better protect critical networks, and works with the U.S. Agency for International
Development, which uses ChatGPT to reduce administrative work for staff.
There is also an opportunity to further its collaboration with the U.S. National Laboratories, building on
a bioscience research partnership with Los Alamos National Laboratory.