Conservative activist Robby Starbuck has filed a defamation lawsuit against Meta, accusing the social media company’s artificial intelligence chatbot of disseminating false statements about him. Among the erroneous claims propagated by Meta's AI was an assertion that Starbuck participated in the January 6, 2021, riot at the U.S. Capitol, a claim he denies, saying he was in Tennessee at the time.

Starbuck, who is known for his criticism of corporate diversity, equity, and inclusion (DEI) programmes, discovered the false information in August 2024 while campaigning against DEI initiatives at Harley-Davidson. He said the defamatory content was first brought to his attention when a motorcycle dealership unhappy with his stance posted a screenshot of Meta’s AI-generated claims in an attempt to undermine him. "This screenshot was filled with lies. I couldn’t believe it was real so I checked myself. It was even worse when I checked," he said in a post on X.

The lawsuit, filed in Delaware Superior Court on Tuesday, seeks damages exceeding $5 million. According to the complaint, Starbuck experienced a sustained barrage of false accusations damaging to his reputation and personal safety. Beyond the claim about his alleged participation in the Capitol riot, Meta’s AI also falsely accused him of Holocaust denial and even claimed he had pleaded guilty to a crime, despite Starbuck asserting he has never been arrested or charged.

Following the discovery of these inaccuracies, Starbuck reached out to Meta’s executives, including legal counsel, in a bid to correct the AI’s output and prevent further harm. He requested a retraction of the false information, an investigation into the cause of the errors, safeguards against recurrence, and transparent communication with users of Meta’s AI services. The complaint alleges Meta was reluctant to make meaningful changes and allowed the misinformation to spread for months despite being informed of the errors. Eventually, Meta’s response was to remove Starbuck’s name from the chatbot's responses, a move Starbuck characterised as insufficient because the AI still associates his name with news stories, allowing it to generate further false claims when prompted.

In response to the lawsuit, a Meta spokesperson stated: "As part of our continuous effort to improve our models, we have already released updates and will continue to do so." Joel Kaplan, Meta's chief global affairs officer, addressed the issue on X, describing the situation as "unacceptable" and apologising for the AI’s inaccurate results relating to Starbuck. Kaplan said he was working with Meta’s product team to investigate the problem and explore solutions.

The case adds to a growing number of lawsuits against AI platforms for the spread of misinformation. In 2023, a conservative radio host in Georgia filed a defamation suit against OpenAI after ChatGPT falsely alleged he committed fraud and embezzlement.

James Grimmelmann, professor of digital and information law at Cornell Tech and Cornell Law School, noted that there is "no fundamental reason why" AI companies could not be held liable for defamatory outputs. He explained that disclaimers alone are insufficient to shield companies from responsibility and compared the challenges in AI defamation cases to similar disputes over copyright infringement. Grimmelmann acknowledged the difficulty of preventing AI from producing misleading or false content but emphasised the importance of accountability in such situations.

PennLive.com is reporting on this developing legal matter, which highlights the ongoing tensions between emerging AI technologies and the legal frameworks governing misinformation and defamation.

Source: Noah Wire Services