Artificial intelligence (AI) tools are likely to provide enhanced opportunities for interference in the 2024 US elections, according to a bulletin from the Department of Homeland Security (DHS). The report highlights that foreign and domestic operatives could use generative AI to create fake video and audio content aimed at confusing voters and election staff. The bulletin warns of the potential for limited operations leading to disruptions in key battleground areas, although a large-scale attack is deemed less likely due to the decentralized nature of the voting system.
The DHS bulletin, dated May 17, underscores that AI tools are becoming more sophisticated and accessible, raising concerns that they could be used to amplify false claims of electoral fraud. As an example, it cites a recent AI-generated robocall that imitated President Joe Biden's voice during New Hampshire's Democratic primary and urged recipients not to vote.
The bulletin also points to past efforts by Chinese and Iranian operatives who created AI-generated content intended to influence U.S. voters during the 2020 election cycle, although that content was never disseminated publicly. AI capabilities have advanced considerably over the past four years, allowing the technology to produce more convincing output from far less source material.
The DHS stresses that both foreign government-backed operatives and domestic extremists are increasingly interested in using generative AI. Chinese actors, in particular, have expanded their influence operations, although they are not yet as effective as Russian operatives were in previous election cycles. Studies by Microsoft have shown Chinese government-run accounts engaging U.S. social media users to gauge opinions on topics like U.S. aid to Ukraine.
These developments signal a shifting landscape in election security, as technological advancements in AI may pose new challenges for safeguarding democratic processes.