The Indian government has overhauled its intermediary rules to tighten oversight of content produced or altered by artificial intelligence, imposing mandatory labelling and much faster takedown obligations on large online platforms. According to coverage of the new measures, firms that host user material will have an explicit duty to mark synthetic audio, visual and audiovisual items so audiences can distinguish manipulated material from original content. (Sources: Times of India, Business Today)
The amendments, to take effect on 20 February 2026, shrink the window for platforms to remove content deemed unlawful by competent authorities to three hours in most cases and to two hours for especially sensitive categories such as non‑consensual intimate imagery and deepfakes. Industry reporting says the change represents a substantial acceleration from the previous 24–36 hour compliance period. (Sources: Times of India, India Today)
The rules formally define “synthetically generated information” as audio, visual or audiovisual material created or altered to appear authentic, bringing such material squarely within the scope of the IT Rules’ unlawful content provisions. Government notices and reporting also make clear that routine camera edits, accessibility adjustments and bona fide educational or design work are excluded from that definition. (Sources: Business Today, Onmanorama)
Regulators are demanding not only visible labelling but also, where technically practicable, the embedding of persistent metadata and unique identifiers to support traceability. Earlier draft proposals called for more prescriptive rules on label coverage, but the finalised amendments soften some of those requirements while retaining obligations for platforms to obtain disclosures from users and to prevent the removal of labels or identifiers once applied. (Sources: Times of India, Times of India (business))
Enforcement is being stepped up: platforms that fail to comply risk forfeiting safe harbour protections that shield intermediaries from liability for user‑posted material, and the rules instruct companies to demonstrate due diligence in monitoring, detection and removal. The government has also encouraged the use of automated tools to curb the spread of illegal, deceptive or sexually exploitative synthetic content. (Sources: Business Today, Onmanorama)
Practical adjustments accompany the tougher deadlines. The ministry has allowed multiple designated officers in populous states to issue takedown directions to avoid bottlenecks, and the regulations include carve‑outs for minor automated edits applied by smartphone cameras. Observers say the package reflects a broader push to balance online safety, accountability and technical feasibility as AI‑generated material becomes more widespread. (Sources: Times of India, Onmanorama)
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [3]
- Paragraph 2: [2], [4]
- Paragraph 3: [3], [7]
- Paragraph 4: [5], [6]
- Paragraph 5: [3], [7]
- Paragraph 6: [2], [7]
Source: Noah Wire Services