The Indian government has for the first time placed AI-generated material squarely inside its intermediary regulation regime, imposing labelling, traceability and faster takedown duties on platforms, with the rules taking effect on 20 February 2026. According to reporting by the Times of India and Business Today, amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, require platforms to identify and tag content created or altered by algorithmic means and to ensure persistent provenance information is attached.

The revised rules adopt a broad working definition of “synthetically generated information”: audio, visual or audio-visual material created or altered through computer resources in a way that appears authentic and could reasonably be mistaken for real persons or events. According to explanations of the changes, exemptions are drawn for routine, good-faith edits and for educational or conceptual material that does not materially change the underlying meaning.

Regulators have focused the definition on deception rather than ordinary enhancement, signalling that the primary concern is content likely to mislead audiences or to fabricate a person or event. Reporting indicates the carve-outs for tasks such as transcription, translation and accessibility improvements are intended to preserve legitimate innovation while curbing misuse.

Intermediaries face expanded notice and reporting obligations. Platforms must periodically inform users that breaches can trigger account suspension or legal action and, where criminal conduct is suspected, must report it to the authorities. Several outlets note that intermediaries which enable the creation or dissemination of unlawful synthetic content must warn users that such activity could attract penalties under multiple statutes; potential consequences include content removal, account suspension and disclosure to victims.

The amendments impose explicit technical duties: intermediaries must deploy reasonable automated measures to prevent the dissemination of unlawful synthetic material, and lawful AI-generated output must carry prominent disclosures. Visual items must display clear labels, audio must carry a notice at the start, and platforms are expected, where technically feasible, to embed metadata or unique identifiers linking content to the hosting service. Platforms are barred from allowing those labels or provenance markers to be suppressed or removed.
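
For illustration only: the amended rules do not prescribe a format for provenance markers, but one plausible shape is a sidecar record keyed to a cryptographic hash of the media. Every field name in the sketch below, including platform_id and provenance_id, is a hypothetical assumption rather than anything specified in the rules.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def build_provenance_record(content: bytes, platform_id: str) -> dict:
    """Hypothetical provenance record linking a synthetic media item
    to the hosting intermediary, in the spirit of the metadata and
    unique-identifier duties described above."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds record to media
        "platform_id": platform_id,          # hosting service identifier
        "provenance_id": str(uuid.uuid4()),  # persistent unique identifier
        "synthetic": True,                   # declared or verified AI origin
        "label": "AI-generated content",     # prominent disclosure text
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    record = build_provenance_record(b"<media bytes>", "example-platform")
    # Persisted alongside the media; the rules bar stripping such markers later.
    print(json.dumps(record, indent=2))
```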

Significant social media intermediaries face heightened controls before publication. They must obtain user declarations about whether content is synthetically generated, use technical means to verify those declarations and, if AI origin is confirmed, ensure the content is labelled before it goes live. The new rules also shorten compliance windows: certain takedown orders must be executed within three hours and some grievance responses must be resolved within reduced timeframes, increasing operational pressure on platforms.
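
A minimal sketch of that pre-publication flow follows, assuming a hypothetical detector looks_synthetic(); the rules require technical verification of user declarations but do not specify how it must be performed, so the stub below stands in for whatever provenance checks or classifiers a platform actually deploys.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    media: bytes
    user_declared_synthetic: bool  # declaration the platform must collect

def looks_synthetic(media: bytes) -> bool:
    """Hypothetical stand-in for the platform's verification step;
    real systems might check embedded provenance metadata or run
    ML-based detection."""
    return False  # placeholder result

def gate_before_publication(upload: Upload) -> dict:
    # If either the user's declaration or the technical check indicates
    # AI origin, the item must carry a label before going live.
    is_synthetic = upload.user_declared_synthetic or looks_synthetic(upload.media)
    return {
        "publish": True,
        "label": "AI-generated content" if is_synthetic else None,
    }

if __name__ == "__main__":
    decision = gate_before_publication(Upload(b"<media bytes>", True))
    print(decision)  # {'publish': True, 'label': 'AI-generated content'}
```

The design point the sketch makes is that the declaration and the verification are independent signals: a platform cannot simply trust the user's answer, and a confirmed AI origin forces labelling regardless of what the user declared.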

Taken together, the changes represent a shift towards traceability and rapid enforcement rather than an outright ban on synthetic media. Observers and industry coverage suggest compliance will require investment in detection tools, metadata infrastructure and faster moderation processes, while raising questions about technical feasibility and the risk of over-blocking lawful creative uses of generative tools. How those tensions are managed in practice will depend on government guidance and platform responses as the rules come into force.
