Technology giant Google has unveiled a comprehensive safety-first roadmap aimed at protecting vulnerable user groups in India, including children, teenagers, and older adults, as the country increasingly embraces artificial intelligence (AI). Central to this strategy is the company’s introduction of on-device, real-time anti-scam tools powered by its Gemini Nano AI model, alongside new text watermarking technologies and digital literacy programmes designed to ensure AI is safer and more inclusive.

Google's new Scam Detection feature, initially available on Pixel phones, operates entirely on-device to analyse incoming calls from unknown numbers. It flags potential scams without recording audio, creating transcripts, or sending any data back to Google, thereby maintaining user privacy. The feature is off by default and emits a subtle beep to alert both call participants, and users retain full control to disable it. It targets phone fraud, a growing problem in India's fast-expanding digital economy, with real-time protection that does not compromise personal data.

Currently available on Pixel 9 and later models, the feature is limited to English-language calls for now. Google is also extending protections against screen-sharing scams to leading financial apps in India, including its own Google Pay alongside Navi and Paytm, by displaying alerts during calls with unknown contacts while screen sharing is active. This signals the company's effort to address emerging scam tactics more comprehensively in the Indian market.

Adding to its AI safety toolkit, Google has broadened access to SynthID Detector and released an open-source version of its SynthID text watermarking tool through the Responsible GenAI Toolkit. SynthID embeds imperceptible, machine-detectable markers in AI-generated content, spanning text as well as images and audio, helping partners and users distinguish synthetic content from real. Such provenance signals are critical for countering misinformation and preserving content authenticity.
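SynthID's exact method is proprietary, but the general idea behind statistical text watermarking can be sketched: during generation, the sampler is nudged toward tokens in a keyed pseudo-random "green" subset of the vocabulary, and a detector later measures how often tokens fall in that keyed subset. The toy sketch below illustrates only the detection statistic under that assumption; it is not Google's implementation, and all names and parameters are hypothetical.

```python
import hashlib


def green_set(prev_token: str, key: str, vocab: list[str]) -> set[str]:
    """Keyed pseudo-random partition: for each (key, previous token) pair,
    roughly half the vocabulary is marked 'green'. Anyone holding the key
    can recompute the same partition; others cannot."""
    return {
        tok for tok in vocab
        if hashlib.sha256(f"{key}:{prev_token}:{tok}".encode()).digest()[0] % 2 == 0
    }


def green_fraction(tokens: list[str], key: str, vocab: list[str]) -> float:
    """Detector statistic: the fraction of tokens that land in the green set
    keyed on their predecessor. Unwatermarked text hovers near 0.5; text
    generated with a green-biased sampler scores noticeably higher."""
    hits = sum(
        tok in green_set(prev, key, vocab)
        for prev, tok in zip(tokens, tokens[1:])
    )
    return hits / max(len(tokens) - 1, 1)
```

In a real system the generator would apply a small logit bonus to green tokens at sampling time, so the detector statistic exceeds chance without visibly degrading the text; production schemes like SynthID Text are considerably more sophisticated than this two-function sketch.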

Google's investment in India’s AI safety ecosystem extends beyond technology. The company has awarded a grant of ₹2 lakh to the CyberPeace Foundation to develop AI-driven cyber-defence mechanisms, enhance safer digital learning environments for young users, and advance responsible governance aligned with the IndiaAI Mission. Additionally, Google has provided $1 million to five leading think tanks and universities across the Asia-Pacific region to foster essential research and informed discourse around AI's challenges and opportunities.

This multi-faceted approach reflects Google's broader effort to balance AI innovation with ethical responsibility. Its internal safety protocols include automated red teaming to detect and address security vulnerabilities in AI models. Google also collaborates with industry partners on standards for content provenance, further enhancing transparency and trust in AI-generated media.

Complementing its AI safety initiatives, Google recently highlighted achievements under its Enhanced Play Protect programme in India, which by January 2025 had blocked nearly 14 million potentially harmful app installations and extended coverage to half of all Android devices in the country. The company has also partnered with Indian cybercrime agencies and joined industry coalitions to promote safer internet practices and protect users from fraud and scams.

Beyond technology interventions, Google plans to launch the Learn and Explore Online (LEO) programme by December 2025. This initiative aims to empower teachers, practitioners, and parents with tools and knowledge to create age-appropriate online experiences and use parental controls effectively, further underlining Google's intent to protect vulnerable digital users through education and community engagement.

“The digital economy in India is booming, and we are committed to building AI systems that keep user trust intact as the country navigates its AI transition,” said Evan Kotsovinos, Vice President of Privacy, Safety and Security at Google. Preeti Lobana, Country Manager of Google India, echoed this vision, emphasising a 360-degree safety approach combining product protections, cloud-based safeguards, and digital literacy to empower users.

As AI continues to reshape technology landscapes, Google’s layered strategy in India, combining cutting-edge on-device AI safeguards, partnerships, educational efforts, and open-source tools, illustrates a robust model for responsible AI deployment focused on user protection, trust, and inclusivity.


Source: Noah Wire Services