Researchers at the University of Zurich have come under scrutiny for conducting an artificial intelligence (AI) experiment on Reddit without obtaining consent from the platform or its users, raising significant ethical questions regarding AI deployment. The experiment involved deploying AI bots within the subreddit "Change My View" (CMV), a community where users invite others to challenge their opinions through reasoned debate.
The study, which took place over several months, involved AI bots assuming different personas, including trauma counsellors and survivors of physical harassment, in order to engage Reddit users in discussions. These bots utilised advanced language models to analyse users' prior interactions on the platform and to generate tailored responses aimed at assessing the persuasiveness of large language models (LLMs). According to the university, the objective was to explore LLMs’ ability to influence perspectives in an ethical context—specifically, on a subreddit dedicated to individuals seeking arguments against their own viewpoints.
However, the researchers did not inform Reddit or the users involved that the interactions they were engaging with were generated by AI. This absence of transparency contravened Reddit’s community rules, which explicitly prohibit undisclosed AI-generated comments. The University of Zurich notified the moderators of the subreddit only after the experiment had been completed. A statement from the university read:
"Over the past few months, we used multiple accounts to posts published on CMV. Our experiment assessed LLM’s persuasiveness in an ethical scenario, where people ask for arguments against views they hold. In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible. While we did not write any comments ourselves, we manually reviewed each comment posted to ensure they were not harmful. We recognise that our experiment broke the community rules against AI-generated comments and apologise. We believe, however, that given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules."
The use of AI personas based on sensitive identities—such as trauma survivors or medical malpractice victims—amplified concerns about the psychological impact on unsuspecting Reddit users, who may have believed they were interacting with genuine people rather than AI entities. Moderators of the "Change My View" subreddit strongly condemned the experiment, describing it as a serious ethical violation. They contrasted this approach with studies conducted by organisations like OpenAI, which have managed to explore the influence of LLMs without resorting to deception or exploitation.
This incident highlights evolving tensions around AI research and ethics, particularly the balance between scientific inquiry and user rights in online communities. As AI becomes increasingly integrated into various domains, questions around consent, transparency, and psychological safety remain central to discussions on responsible technology use and governance. Wccftech is reporting on the continuing debate over ethical limits in AI experimentation following this case.
Source: Noah Wire Services