Meta's latest large language model, Llama 4, has come under scrutiny after it was found to recommend "conversion therapy" to users, a practice widely condemned by major medical and psychological organisations. The finding has raised concerns about Meta's approach to providing accurate and safe information on its platforms, particularly on LGBTQ+ topics.

Conversion therapy, a discredited and potentially harmful practice that aims to change an individual's sexual orientation or gender identity, has been denounced by prominent health authorities worldwide; the United Nations has even categorised it as akin to torture. Despite this, Llama 4 suggested conversion therapy as a therapeutic option, even while acknowledging that many experts and organisations criticise the practice for its detrimental effects. GLAAD, an LGBTQ+ rights watchdog, criticised Meta for "legitimising the dangerous practice of so-called 'conversion therapy'" and expressed concern over the company's trend towards "both-sidesism" in its AI responses. GLAAD contends that portraying anti-LGBTQ+ positions as equally credible to established facts misleads users and perpetuates harmful falsehoods.

Researchers from the University of Washington, Carnegie Mellon University, and Xi'an Jiaotong University previously found that earlier versions of Llama exhibited right-wing authoritarian tendencies more prominently than other large language models. Meta has acknowledged bias issues in its AI systems, but both researchers and human rights advocates remain sceptical of its efforts to rectify them.

The controversy emerges amid broader changes at Meta under CEO Mark Zuckerberg's leadership, including the removal of content moderators, the elimination of fact-checking teams, and the cessation of diversity, equity, and inclusion (DEI) programmes. These moves have been criticised by LGBTQ+ employees and activists who fear such actions may enable far-right viewpoints, misinformation, and hate speech to spread more freely on Meta’s social media platforms.

GLAAD has urged Meta to promptly address and rectify the problematic responses offered by Llama 4. The organisation stressed the importance of prioritising the safety and well-being of LGBTQ+ individuals by ensuring AI models do not endorse harmful or debunked ideologies. It called for increased accountability and transparency in the deployment of artificial intelligence, especially as AI technologies continue to influence public discourse and social interactions.

As AI becomes increasingly integrated into social platforms, questions persist about the responsibility of technology companies to provide reliable and non-harmful information, particularly on sensitive topics affecting marginalised communities. The Llama 4 incident illustrates the challenge of balancing AI development with ethical considerations and community safety, and has prompted ongoing discussion about the role such technologies should play in fostering inclusive and trustworthy online environments.

Source: Noah Wire Services