On March 17, 2025, the Hoover Institution in Stanford, California, hosted a significant conference titled "Social Media and Democratic Practice," convening scholars and experts to explore the multifaceted impact of social media platforms and artificial intelligence (AI) on public discourse and democracy. The event, organised by Senior Fellow Morris P. Fiorina and supported by Hoover’s Center for Revitalizing American Institutions, gathered research findings and expert insights addressing both the potential benefits and risks posed by these digital tools in democratic engagement.
Past studies of legacy social media platforms such as Facebook, Twitter (now X), and YouTube had generally found minimal evidence that explicitly political content on these networks caused harm. Filter bubbles were largely absent, and misinformation or disinformation seemed to have little measurable effect, primarily because most social media users paid limited attention to political content. However, recent years have seen the rise of new platforms and formats, including podcasts and nonpolitical content with indirect political consequences. For example, the 2024 Joe Rogan podcast episode featuring former US President Donald Trump reportedly garnered more than 50 million downloads, suggesting broad reach and influence beyond traditional political channels.
At the conference, Fiorina emphasised the need for renewed research focus, stating, “Today is a first step in measuring impact and reach of political content on social media, something academics have not paid enough attention to in recent years.” He highlighted the dual nature of social media and AI, which can simultaneously undermine and enhance democratic practice.
Among the research presentations was a study led by Tom Costello, assistant professor at American University, assessing the use of AI to counter conspiracy theories. His team developed an AI agent called DebunkBot, which engaged 761 participants who endorsed various conspiracy theories, including those surrounding the 9/11 attacks, the assassination of President John F. Kennedy, and COVID-19. The initial interaction with DebunkBot resulted in a 40 percent reduction in belief in these theories, with a sustained 20 percent decline observed after two months. Costello noted the challenges inherent in traditional debunking approaches, which are limited in scope, explaining to the gathering that “advanced AI could be the solution by generating persuasive arguments” that address a broad range of false claims effectively.
Jennifer Allen, incoming assistant professor at New York University, presented her research on vaccine misinformation and scepticism on Facebook. Her study examined the influence of social media post-2016 on vaccine uptake in the United States, with particular attention to the persistence of misinformation during and after the COVID-19 pandemic. Allen found that despite the implementation of Meta’s third-party fact-check programme, misinformation persisted, often reaching very large audiences. Importantly, she drew a distinction between flagged misinformation and unflagged “vaccine sceptical” content. The latter, which often referenced adverse health events temporally linked to vaccination but lacked context or explanation, had a significantly stronger impact on reducing vaccine intent.
One striking example cited was a Chicago Tribune story about the death of a previously healthy doctor after vaccination, which received five times more views than all flagged vaccine misinformation combined. Allen explained, “The content not marked as misinformation on Facebook was found to be 50 times more impactful in reducing vaccine intentions than demonstrably false claims.” She also noted that some reputable news outlets inadvertently contributed to vaccine scepticism amid the fast-evolving scientific understanding during the pandemic.
Conference participants also addressed broader concerns regarding online discourse, including rising incivility fuelled by anonymity, as well as governmental and institutional attempts to censor speech. The example of COVID-19 related discourse was highlighted, where theories about the virus's origins and vaccine efficacy were suppressed initially but have since gained some scientific recognition.
The event concluded with a panel discussion featuring representatives from Meta, election law expert and Distinguished Visiting Fellow Benjamin Ginsberg, and free speech scholar and Senior Fellow Eugene Volokh. Moderated by Stanford Law School professor and Cyber Policy Center co-founder Nate Persily, the panel examined the complex challenges social media poses to both legal frameworks and democratic norms.
Through this conference, the Hoover Institution advanced the conversation on the evolving role of social media and AI in democracy, presenting new evidence and fostering dialogue on how these powerful technologies shape political engagement and public trust.
Source: Noah Wire Services