Volunteer editors on Wikipedia have voted to prohibit the use of large language models in article creation, citing fears of AI-driven errors and unverified content, while allowing limited, human-verified AI assistance for translation and editing.
Volunteer editors on Wikipedia have voted to bar the use of large language models to write or rewrite article content, a move that reflects growing anxiety within the community about AI-driven errors and unverifiable claims. According to The Guardian and Semafor, the new guideline prohibits editors from employing generative AI to produce prose for encyclopedia entries, while still permitting tightly constrained uses of tools for translation and basic copyediting so long as humans verify changes and no new information is introduced.
Source Reference Map
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first
emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed
below. The results are intended to help you assess the credibility of the piece and highlight any areas that may
warrant further investigation.
Freshness check
Score: 8
Notes:
The article reports on Wikipedia's recent policy change banning AI-generated content; the earliest known publication date is March 27, 2026. ([theguardian.com](https://www.theguardian.com/technology/2026/mar/27/wikipedia-bans-ai?utm_source=openai)) The content appears original, with no evidence of prior publication. However, the article recycles older material alongside updated data, which raises concerns about freshness. Given these factors, the freshness score is reduced to 8.
Quotes check
Score: 7
Notes:
The article includes direct quotes from Wikipedia's new policy statement. However, these quotes cannot be independently verified: no online matches were found. This lack of verifiability raises concerns about the accuracy and reliability of the information presented, so the quotes check score is reduced to 7.
Source reliability
Score: 8
Notes:
The article is sourced from Decrypt, a reputable news outlet. However, it recycles older material alongside updated data, which raises concerns about freshness. Given this, the source reliability score is reduced to 8.
Plausibility check
Score: 9
Notes:
The claims made in the article are plausible and align with known developments in AI and content moderation. However, the lack of independently verifiable quotes and the recycling of older material raise concerns about the accuracy and reliability of the information presented. Given these factors, the plausibility check score is slightly reduced to 9.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
The article presents information on Wikipedia's new policy banning AI-generated content. However, the lack of independently verifiable quotes, reliance on a single source, and recycling of older material raise significant concerns about the accuracy, reliability, and freshness of the information presented. Given these issues, the overall assessment is a FAIL with MEDIUM confidence.