OpenAI Reports Covert Disinformation Campaigns Using Its A.I. Tools

OpenAI announced on May 30, 2024, that it had identified and disrupted five covert online influence campaigns that used its AI technologies to manipulate public opinion globally. The campaigns were orchestrated by state actors and private entities in Russia, China, Iran, and Israel. The operations used OpenAI's tools to generate social media posts, translate and edit articles, and perform other tasks aimed at swaying political sentiment and influencing geopolitical events.

The identified operations included the Russian campaigns Doppelganger and Bad Grammar, the Chinese network Spamouflage, a pro-Iranian initiative run by the International Union of Virtual Media, and an Israeli campaign by the firm STOIC. These campaigns targeted issues such as the Russia-Ukraine conflict and political situations in various countries.

Despite the advanced AI capabilities involved, OpenAI's principal investigator Ben Nimmo noted that these efforts had limited impact and failed to gain substantial traction. The findings, detailed in OpenAI's report, highlight ongoing concerns about AI's role in disinformation.