AI Becomes Magnet For Foreign Adversaries In U.S. Elections

Artificial intelligence (AI) and chatbots such as ChatGPT have become a magnet for countries like Russia and Iran seeking to create deceptive online campaigns and other content focused on U.S. presidential candidates.

On Friday, OpenAI said company developers had identified and taken down “a cluster of ChatGPT accounts that were generating content for a covert Iranian influence operation identified as Storm-2035.” The operation used ChatGPT to generate commentary on candidates in the U.S. presidential election, which it then shared via social media accounts and websites.

OpenAI banned the accounts from using OpenAI services and continues to monitor for any further attempts to violate its policies. The operation does not appear to have achieved any “meaningful” engagement with audiences, and the majority of social-media posts identified received few or no likes, shares, or comments.


OpenAI wrote in the website post that the operation apparently had two aims: generating long-form articles, and producing short comments in English and Spanish that were posted on social media.

About a dozen accounts on X and one on Instagram were involved in the operation. Some of the X accounts posed as progressives, others as conservatives. They generated some of these comments by asking ChatGPT models to rewrite comments posted by other social media users.

“The operation generated content about several topics: mainly, the conflict in Gaza, Israel’s presence at the Olympic Games, and the U.S. presidential election -- and to a lesser extent politics in Venezuela, the rights of Latinx communities in the U.S. (both in Spanish and English), and Scottish independence,” the post explains. “They interspersed their political content with comments about fashion and beauty, possibly to appear more authentic or in an attempt to build a following.”

Meta on Thursday released a security report detailing how Russia is putting AI to work. The company identified six new covert influence operations originating in Russia, Vietnam and the United States.

A deceptive campaign from Russia published a large volume of stories resembling authentic articles from across the internet -- including mainstream media -- on its fictitious news websites.

“We assess that these stories were likely summaries of the originals generated using AI tools to make them appear more unique,” Meta wrote in the report. “The same campaign also posted AI-generated news-reader videos on YouTube.”

Russia also ran fictitious journalist personas across the internet, each with consistent profile photos, often created with generative adversarial networks (GANs), designed to make the personas appear more convincing.

Meta said this continues a trend it has been monitoring since 2019. The report is lengthy and can be found here.
