The Karnataka government has approved an Artificial Intelligence-driven Social Media Analytics Solution (SMAS) to monitor online content, with the cabinet allocating roughly Rs 67.2 crore (some reports cite Rs 67.26 crore) for the project. According to media reports, the move was presented as a state effort to strengthen its digital-governance capabilities amid rapid growth in online content. (Sources: 2,3)
Officials say the SMAS will operate across social media platforms, websites and other digital channels in real time to identify fake news, hate speech, misleading posts, online abuse, cyber threats and harmful narratives. The system is also intended to detect and trace attempts at online recruitment by terror outfits and to help identify the origin of disinformation. (Sources: 2,5)
Speaking after the cabinet meeting, Law and Parliamentary Affairs Minister HK Patil said “Social media content will now be scanned by the government through SMAS.” He added that the tool was needed to track misinformation and emerging digital threats. According to reports, when asked about the legal authority to scan posts, he said there is “absolutely no bar” on screening or verifying content where manipulation or criminal intent is suspected. (Sources: 2,5)
The Tender Approval Committee will settle the technical specifications, after which procurement will proceed through an e‑tendering process, officials have said. The government frames the system as a way to accelerate the detection of, and response to, risks that could affect public order and social harmony. (Sources: 2,5)
The deployment comes against the backdrop of wider state-level measures to regulate online misinformation. According to earlier reporting, the Karnataka cabinet has proposed a Misinformation and Fake News (Prohibition) Bill that envisages a regulatory authority to oversee such matters, and other coverage has outlined proposals for stringent penalties and special courts to deal with violations. (Sources: 6,4)
Civil liberties advocates and digital-rights commentators warn that automated monitoring of public communications risks overreach and a chilling effect on free expression unless robust safeguards, transparency and independent oversight are put in place. Industry observers note the technical limits of automated content moderation and the potential for algorithmic bias, underscoring calls for clear governance frameworks alongside any technical system. The state, however, maintains that the system is essential to curb the misuse of online platforms. (Sources: 3,4,2)
No firm implementation timetable has been published; officials say the project will move forward once tender specifications are finalised and procurement is completed. Reporting indicates the administration intends the tool to become a central element of its strategy to counter misinformation and digital threats. (Sources: 2,5)
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [3]
- Paragraph 2: [2], [5]
- Paragraph 3: [2], [5]
- Paragraph 4: [2], [5]
- Paragraph 5: [6], [4]
- Paragraph 6: [3], [4], [2]
- Paragraph 7: [2], [5]
Source: Noah Wire Services