YouTube's chief executive Neal Mohan has defended the platform's growing reliance on artificial intelligence for content moderation, saying the technology improves "literally every week" and is essential to "detect and enforce on violative content better, more precise, able to cope with scale." According to the original report, Mohan made the remarks in a Time Magazine profile published as he was named the magazine's 2025 CEO of the Year.
His defence has provoked sharp criticism from prominent creators who say automated systems are wrongly terminating channels, sometimes overnight. In a December 10 video, creator MoistCr1TiKaL called Mohan's stance "delusional" and argued that "AI should never be able to be the judge, jury, and executioner", a contention echoed by other creators who say automated enforcement has cost them their livelihoods. According to the original report, the criticism intensified after a spate of high‑profile terminations and rapid appeal rejections.
The controversy centres on the platform's hybrid moderation infrastructure, which processes hundreds of hours of uploads every minute and pays creators through a Partner Program that now supports some 3 million monetised channels. Industry data shows YouTube's ad revenue and product shifts under Mohan have raised the stakes: the platform reported billions in advertising income and substantial growth in Shorts consumption, creating strong incentives to scale enforcement with machine learning.
Illustrative cases cited by creators include Pokemon YouTuber SplashPlate, whose channel was terminated on December 9, 2025, for alleged circumvention, only to be reinstated the following day after public attention; YouTube later acknowledged the account was "not in violation" of its Terms of Service. Creators such as animation maker Nani Josh say they received appeal rejections within minutes, despite TeamYouTube publicly stating on November 8 that appeals are "manually reviewed," a pattern that raises questions about how often human reviewers actually review, let alone override, automated decisions. According to the original report, multiple creators documented provisional reinstatements followed by subsequent terminations as additional automated checks were applied.
YouTube has publicly defended its hybrid model, saying automation is necessary to handle scale while humans review nuanced cases and train the systems. In a November 13 statement to creators, the company said automation "catches harmful content quickly" but that humans are involved in complex decisions, and it pointed to a need for creator education around its policies on mass uploading and low‑value or scraped content. However, creators contend that documented instant rejections and inconsistent outcomes, across channels large and small, undermine faith in the promise of manual oversight.
The dispute comes as YouTube expands AI across both enforcement and creator tools. Mohan has promoted more than 30 AI‑powered features introduced in 2025, from automatic editing and Shorts generation to dubbing and generative effects, arguing these will democratise production and create "an entirely new class of creators." Critics counter that the same tools can be used to mass‑produce low‑quality or appropriated content that games platform incentives, intensifying the very moderation challenges automation is meant to solve. According to the original report, MoistCr1TiKaL warned that easier AI creation risks producing "AI slop" at scale.
The tensions feed into wider worries about YouTube's strategic trajectory. Commentary channels have accused the company of prioritising professionally produced media and short‑form content over independent long‑form creators, suggesting algorithmic and business incentives may be shifting the ecosystem. Industry reporting and creator analysis argue that YouTube's push to become more "advertiser‑friendly" and to invest heavily in AI is reshaping recommendations, view patterns and creator economics.
Corporate moves inside Google and YouTube reinforce the emphasis on AI. The company has reorganised product teams and offered voluntary buyouts as it pivots further into AI‑led products for viewers, creators and subscribers, a strategy Mohan has framed as necessary to maintain competitiveness even as it heightens the consequences of moderation errors for creators' incomes. The company said these investments will be applied more intentionally across viewer and creator products.
For creators, the practical implications are stark. YouTube allows one appeal per termination, filed through YouTube Studio within a year of the termination date, and has piloted a reinstatement pathway that lets some creators request a new channel after a one‑year waiting period; the programme excludes copyright and serious Creator Responsibility violations. According to the original report, these measures acknowledge that enforcement standards have evolved since YouTube's early days, but they do not remove the immediate financial and reputational harms experienced by those abruptly removed.
The episode underscores a broader industry dilemma: platforms must reconcile machine scale with the human judgement required for high‑stakes decisions. Mohan argues AI and humans form a "team effort" to protect the platform at scale, while creators and commentators call for clearer safeguards, greater transparency and, in some cases, legislative limits on automated terminations. As YouTube integrates more generative features and sharper enforcement, the balance it strikes will affect creator confidence, advertiser trust and the contours of online cultural production.
📌 Reference Map:
- [1] (PPC Land) - Paragraph 2, Paragraph 4, Paragraph 5, Paragraph 6, Paragraph 9, Paragraph 10
- [2] (TIME) - Paragraph 1, Paragraph 3, Paragraph 6
- [3] (TIME event coverage) - Paragraph 1, Paragraph 6, Paragraph 10
- [4] (Yahoo Finance) - Paragraph 8, Paragraph 9
- [5] (NDTV) - Paragraph 1
- [6] (Semafor) - Paragraph 10
- [7] (Medianama) - Paragraph 6
Source: Noah Wire Services