Instagram is under intense scrutiny following an investigation revealing that the platform has been actively recommending pro-Nazi, Holocaust-denial, and openly anti-Semitic reels to millions of users. The report, published by Fortune, found that this extremist content was not only widely circulated but appeared alongside advertisements from major advertisers including JPMorgan Chase, SUNY, Nationwide Insurance, Porsche, and even the US Army. The investigation highlights how Instagram’s algorithm rewards engagement with such content, rapidly feeding users a stream of hateful, fascist-themed posts often disguised as edgy humour or ironic memes.

Central to the controversy was an account known as @forbiddenclothes, a now-deleted fashion brand page that posted fascist-themed memes garnering massive engagement. One notorious example used the Nazi SS officer from the film Inglourious Basterds in a meme trivialising family political arguments. Despite a substantial number of critical comments, the post attracted many endorsements and acted as a gateway to further extremist material. Another reel, viewed 1.4 million times, featured AI-manipulated footage of an Adolf Hitler speech alongside fabricated claims identifying Jewish individuals in political and media positions, accompanied by anti-Semitic commentary praising Hitler. Holocaust denial was also rampant, with memes mocking the genocide and promoting racist conspiracy theories about Jewish control over media and history.

The timing of these revelations is significant, coming just months after Meta CEO Mark Zuckerberg drastically eased content moderation rules and ended the company’s independent fact-checking programme in the US. In January 2025, Zuckerberg announced the shift as part of a broader effort to “restore free expression” by adopting less restrictive content rules, raising the threshold for removing hate speech, and replacing third-party fact-checkers with a community-driven notes system similar to that used by Elon Musk’s platform, X. Critics argue the move has allowed extremist propaganda to spread more widely on Instagram.

Zvika Krieger, Meta’s former director of responsible innovation, told Fortune that the moderation systems became “intentionally less sensitive” after these policy changes, meaning the content that generates the most engagement, often inflammatory or extremist material, was increasingly promoted by the algorithm. The shift coincided with growing concern about online hate: the Anti-Defamation League reported a fivefold increase in harassment targeting Jewish members of Congress on Facebook following the policy reversal. Meta says it removed millions of pieces of violating content in the first half of 2025, but it has acknowledged that its proactive detection rate is lower than previously claimed, raising questions about the adequacy of enforcement.

The investigation also uncovered the financial incentives behind spreading such content. Several creators admitted to earning significant sums from Nazi-themed posts and extremist material, with one UK-based meme account operator disclosing more than £10,000 in revenue from merchandise sales linked to Hitler-themed memes. This economic motivation complicates efforts to curb hateful content, as offensive reels tend to grow audiences and generate rewards through Instagram's monetisation systems.

Advertisement placements alongside extremist content have drawn considerable criticism. Although Meta has stated that brands do not intend for their ads to appear next to harmful posts and that flagged content is removed quickly, the appearance of trusted brands’ advertising next to deeply offensive material highlights the ongoing challenge of moderating content at scale. A US Army spokesperson clarified that the military does not control where Meta places its ads, underscoring the automated nature of ad placement.

Meta’s overhaul of content moderation is part of a wider deregulation of political and sensitive content. The company has relaxed hate speech rules, particularly around sexual orientation, gender identity, and immigration, aiming to align its standards with what Zuckerberg termed “mainstream discourse” in response to pressure from conservative factions and the anticipated second term of President Donald Trump. The relaxation has sparked concern among advocacy groups, who fear the changes will lead to increased offline harm and a rise in abuse targeting vulnerable communities.

The European Commission has countered Zuckerberg’s assertion that EU data laws censor social media, clarifying that its regulations require only the removal of illegal content in order to protect democracy and children. EU users will continue to benefit from independent fact-checking, in contrast with the US, where Meta’s community notes system has replaced formal fact-checking.

As Instagram and its parent company Meta navigate this contentious policy environment, striking a balance between free expression and preventing the spread of extremist content remains a formidable challenge. The recent revelations underscore the profound consequences of loosening content moderation rules and the urgent need for transparency, accountability, and more robust safeguards against hate speech and misinformation across social media platforms.

📌 Reference Map:

  • [1] (Daily Mail) - Paragraphs 1-11
  • [2] (Reuters) - Paragraph 7
  • [3] (Washington Post) - Paragraphs 4, 5
  • [4] (Reuters) - Paragraphs 4, 5, 6
  • [5] (AP News) - Paragraph 6
  • [6] (AP News) - Paragraph 6
  • [7] (TechCrunch) - Paragraphs 4, 5

Source: Noah Wire Services