A parliamentary standing committee has warned that the rapid spread of artificial intelligence is creating novel hazards for India’s information ecosystem, urging stronger regulatory safeguards to curb the misuse of deepfakes and other synthetic media. According to the committee’s report, the country must build an AI environment that is safe, accountable and fair as technologies that generate realistic but fabricated content become more widespread.

The report flags particular concern about how AI-produced audio, video and images can be weaponised to spread falsehoods and manipulate public opinion, heightening risks to democratic discourse and individual reputation. It recommends that governance mechanisms be scaled up as deployment of generative tools expands across social platforms and services.

Parliamentarians noted that existing legal obligations on intermediaries already demand systems to detect and take down unlawful material, but said those measures may be insufficient against increasingly sophisticated synthetic content. The committee highlighted recent regulatory moves that formally define AI-generated content and impose tighter takedown responsibilities on platforms.

The review places the Digital Personal Data Protection Act, 2023 at the centre of proposed safeguards, arguing that its emphasis on consent, transparency and accountability can help govern the data flows that underpin many AI systems. According to legal analyses and industry commentary, the Act creates duties for data fiduciaries and establishes an enforcement board that could play a role in oversight of AI-driven data processing.

The committee also drew attention to implementation work under way: ministries have been finalising rules to operationalise the data protection law, and technology-specific guidance is being discussed to ensure compliance by large platforms and AI developers. Industry advisers note that forthcoming subordinate rules will be critical to how firms adapt their practices, including notifications for data breaches and automated decision-making.

Alongside statutory instruments, the report endorses the proposed IndiaAI Safety Institute as a technical and governance resource to develop safety standards, testing protocols and capacity building for civil servants and regulators. The committee suggested the institute could support both certification of models and research into detection methods for synthetic media.

The panel urged a layered approach: strengthen platform accountability and rapid-response procedures for emergent deepfakes, deepen data-protection safeguards for users, and invest in technical detection and public literacy so citizens can better recognise manipulated material. It argued that only a combination of legal rules, institutional capacity and technical tools will mitigate harms without stifling innovation.

Implementation challenges remain, including the pace at which subordinate rules are issued and how enforcement bodies will coordinate across ministries and the private sector. The committee’s recommendations stress that timely rulemaking and clear lines of responsibility will be essential if India is to balance the benefits of AI with protection against misinformation and abuse.

Source: Noah Wire Services