The Indian government has introduced stronger legal protections for millions of citizens who share their photos and personal data with artificial intelligence (AI) applications, reflecting a growing regulatory response to the rapid expansion of AI technologies. Announced by the Minister for Electronics and Information Technology, Ashwini Vaishnaw, the new framework brings personal images, biometric data, and other sensitive information under the ambit of the Digital Personal Data Protection Act, 2023, alongside the recently notified Digital Personal Data Protection Rules, 2025. The Rules took effect on November 13, 2025, establishing a comprehensive regime to regulate data privacy and enhance user control over digital information.

Central to this regulatory architecture is the newly formed Data Protection Board of India, mandated to oversee compliance, adjudicate complaints, and enforce penalties where data misuse occurs. The Board, comprising a Chairperson and four members appointed by a Search-cum-Selection Committee, signals the government's commitment to strengthening individual rights over data and holding organisations accountable for how personal information is processed. The initiative particularly targets growing concerns around AI-generated content, including deepfakes and morphed images, which pose serious risks such as misinformation, defamation, and reputational damage.

In addition to legislative measures, the government has engaged closely with social media platforms and intermediaries through a series of advisories issued between December 2023 and November 2025. These directives reaffirm platform responsibilities under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and the 2025 amendments, which require prompt removal or disabling of access to unlawful content within 36 hours of notification by the authorities. Further proposed amendments would mandate labelling, watermarking, and traceability of AI-generated images and videos, aimed at enhancing transparency and enabling users to readily identify synthetic or manipulated content.

Responsibility for investigating and prosecuting offences involving AI misuse lies with state police and law-enforcement agencies, which retain authority over crimes involving social media. This decentralised enforcement model works alongside the national Data Protection Board to address both regulatory compliance and criminal misuse.

Technological initiatives form a critical part of the government's response to AI challenges. Under the IndiaAI Mission, launched in March 2024 to promote safe and responsible AI use, several projects under the Safe & Trusted AI pillar focus on detecting and mitigating the misuse of AI-generated content. These include "Saakshya," a framework for deepfake detection and governance developed jointly by IIT Jodhpur and IIT Madras; "AI Vishleshak," a collaboration between IIT Mandi and the Himachal Pradesh Directorate of Forensic Services that detects audio-visual deepfakes and forged signatures; and a Real-Time Voice Deepfake Detection System from IIT Kharagpur. These technologies aim to strengthen India's capability to identify synthetic manipulations, thereby protecting citizens from potential reputational and financial harm.

The Digital Personal Data Protection Act, 2023 itself represents a landmark in India's data privacy landscape. It strikes a balance between protecting individual data rights and facilitating lawful data processing by organisations, drawing from global standards and adapting them to national needs. According to government summaries, the Act establishes specific obligations for data fiduciaries, rules for consent management, penalties for non-compliance, and safeguards for children's data. Phased implementation allows its provisions to come into effect progressively, enabling a structured roll-out of protections.

Complementing this legislative framework, the Digital Personal Data Protection Rules, 2025, impose stricter requirements on companies operating in India, such as minimising data collection to what is strictly necessary, providing clear explanations for data usage, allowing users to opt out, and mandating breach notifications. This regulatory stance aligns India more closely with international frameworks such as the European Union's General Data Protection Regulation (GDPR), giving users greater control over their personal data.

The government's multi-pronged strategy, combining legislation, enforcement, technological innovation, and engagement with digital platforms, signals a robust commitment to tackling the risks posed by AI and protecting the privacy and security of Indian citizens in the digital age.

📌 Reference Map:

  • [1] BestMediaInfo - Paragraphs 1, 2, 3, 4, 5, 6, 7
  • [2] IndiaCode.nic.in - Paragraphs 2, 8
  • [3] DSCI Summary - Paragraph 8
  • [4] EY Report - Paragraph 8
  • [5] IIT Kharagpur News - Paragraph 7
  • [6] Reuters - Paragraph 8

Source: Noah Wire Services