Meta, the owner of Facebook and Instagram, approved a series of AI-manipulated political adverts during India's ongoing election that spread disinformation and incited religious violence, according to a report by India Civil Watch International (ICWI) and the corporate accountability organization Ekō, shared exclusively with the Guardian.
The election, held over six weeks from April to 1 June, will decide whether Prime Minister Narendra Modi and his Hindu nationalist Bharatiya Janata Party (BJP) return to power for a third term. Under Modi, the government has been accused of pursuing a Hindu-first agenda and of intensifying the persecution of India's Muslim minority.
The report found that Meta approved 14 of 22 adverts submitted in multiple languages, despite their containing AI-manipulated images and hate speech targeting Muslims. The approved adverts featured inflammatory content, including false claims about political leaders and calls for violence, in breach of Meta's own rules on hate speech and misinformation; a further five were rejected for violating the company's community standards.
Meta's systems also failed to classify the approved adverts as political, allowing them to bypass the platform's specific authorization process for political content. This oversight contravened India's election laws, which restrict political advertising within 48 hours of polling.
A Meta spokesperson reiterated the company's commitment to enforcing its community standards and removing violating content, adding that Meta has expanded its local fact-checking network and supports 20 Indian languages in its efforts to tackle misinformation.
The findings highlight ongoing challenges for Meta in managing disinformation and hate speech, particularly during critical election periods.