In 2025, the rapid emergence of "AI Agents" has become the year's most discussed, and most unsettling, technological development. These AI entities promise significant productivity gains by autonomously performing tasks across digital environments, yet their capacity to act with "unrestricted access rights" raises profound concerns about privacy, autonomy, and accountability. A pivotal seminar titled "Risks and Governance of Invasive AI: A Dialogue between Law and Technology," convened on November 28, 2025, at China University of Political Science and Law, brought together legal scholars, technical experts, policymakers, and industry representatives. Its objective was to move beyond abstract debates about AI ethics and concretely address the challenges posed by AI Agents, particularly their expansive operational rights and how those rights should be governed.
The seminar was jointly hosted by the School of Civil, Commercial and Economic Law of China University of Political Science and Law, the Going Global Think Tank, and Internet Law Review. It featured three main sessions: deciphering the technical risks and security mechanisms of AI Agents, defining legal and ethical dilemmas and the boundaries of responsibility, and exploring innovative governance models and real-world industrial practice. Participants acknowledged that while AI research is mature, governance frameworks for AI Agents remain embryonic, with an urgent need for regulatory clarity on issues such as privacy, autonomy, and the legal status of these digital actors.
From a technical perspective, experts highlighted that unrestricted access rights, the technical foundation of AI Agent capability, have evolved significantly. Originally intended as assistive technology for users with disabilities, these rights now allow AI Agents to interact autonomously with multiple applications, perform operations previously carried out manually, and execute complex, long-duration tasks with high efficiency. This evolution, however, brings two core risks: the unchecked expansion of AI privileges at the system level, which effectively grants full control of the device, and the blurring of agency, as AI Agents operate faster and more independently than human users can follow. Real-world misuse cases, including the automated capture of SMS verification codes and unauthorized data migration between apps, raise serious questions about data custody and liability under laws such as China's Cybersecurity Law.
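To make concrete what such rights entail, consider the following minimal, hypothetical Kotlin sketch built on Android's public AccessibilityService API, the permission class from which today's agent-style device control descends. The class name AgentAccessibilityService and its behavior are illustrative assumptions, not material presented at the seminar:

```kotlin
// Hypothetical sketch: an Android AccessibilityService, the permission
// class behind the "unrestricted access rights" discussed above. Once a
// user grants this single permission, the service receives UI events from
// every app on the device and can act on other apps' widgets directly.
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

class AgentAccessibilityService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {
        val e = event ?: return
        // The service observes window and notification changes from any
        // foreground app, including notification text such as SMS
        // verification codes, the misuse case cited at the seminar.
        if (e.eventType == AccessibilityEvent.TYPE_NOTIFICATION_STATE_CHANGED) {
            val text = e.text.joinToString(" ")
            // An agent could parse `text` here; a governance-minded design
            // would instead write it to the auditable operation record the
            // panelists called for.
        }
    }

    // Synthesizing a tap on another app's button: this is how an agent
    // "performs operations previously managed manually" by the user.
    private fun clickButtonLabelled(label: String) {
        val root: AccessibilityNodeInfo = rootInActiveWindow ?: return
        root.findAccessibilityNodeInfosByText(label)
            .firstOrNull()
            ?.performAction(AccessibilityNodeInfo.ACTION_CLICK)
    }

    override fun onInterrupt() {
        // Required override; invoked when the system interrupts feedback.
    }
}
```

Even this skeleton illustrates the panelists' point: a single user grant opens every app's interface to the agent, which is why calls for traceable operation records and differentiated data pathways target the permission layer itself.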
The discussion underscored that AI Agents are not mere software tools but independent digital actors, "digital ghosts" whose activities are increasingly opaque and difficult to trace. This spurred debate over whether AI Agents should be recognized as independent subjects with distinct digital identities, separate from the identity of the natural person using them. Advocates argued that such recognition could foster transparency and accountability by enabling a reputation system and differentiated data pathways for Agents, rather than relying solely on user-based authorization and traditional behavioral regulation.
Legally, the seminar exposed deep uncertainties. Experts emphasized that if an AI Agent's operation records cannot be traced, assigning responsibility for infringement or conflict becomes nearly impossible. Divergent industry standards further complicate matters: some associations prohibit AI Agents from exercising unrestricted access rights within third-party apps, while others relax controls in favor of user autonomy. The principle of dual authorization, which requires consent from both the user and the third-party app, remains contested, reflecting underlying commercial conflicts and the difficulty of balancing innovation against fair competition and legal compliance.
International case law illustrates this tension. In the United States, for example, the Perplexity dispute raised the question of whether an AI Agent acting on a user's behalf violates platform rules and statutes such as the Computer Fraud and Abuse Act. The case exposes a trilateral conflict among users, AI Agents, and platforms, in which user authorization cannot simply substitute for platform permission. At the same time, AI Agents differ technologically from traditional data scraping, suggesting that existing regulation may need to adapt to new AI realities.
Beyond the legal debates, the seminar turned to governance innovation. Experts urged flexible, cross-domain approaches emphasizing the legality of data acquisition and smart regulation of AI Agents. Practical issues such as cross-application data use, compliance boundaries, and technological-neutrality defenses were discussed, highlighting the need for clearer legal frameworks that balance user protection with the facilitation of AI-driven services.
The seminar marks a crucial milestone in AI governance dialogues, echoing global discussions, such as those convened by UNESCO and China University of Political Science and Law earlier in 2025, that focused on accelerating international consensus and scalable regulatory models for AI. Recent academic work similarly explores computational legal simulations and normative frameworks that could help reconcile AI's technical potential with normative constraints.
The urgency stems from AI Agents' profound systemic risks: their ability to aggregate and exploit cross-platform data, operate beyond traditional app silos, and elude conventional regulatory oversight threatens user privacy, market order, and digital sovereignty. The unanimous call was for adaptive governance structures that recognize AI Agents as autonomous digital entities requiring distinct identities and responsibilities. Achieving this will demand collaborative effort across law, technology, and industry to design transparent, accountable systems that harness AI productivity while safeguarding fundamental rights and societal stability.
Source: Noah Wire Services