Hong Kong’s privacy regulator has published a practical toolkit to help schools and parents prevent and manage incidents involving AI-generated deepfakes of children and young people, underscoring a growing focus on protecting minors in educational settings. According to the Office of the Privacy Commissioner for Personal Data (PCPD), the guidance outlines common types of deepfakes, typical abuse scenarios in schools, and step-by-step recommendations on preventing their creation, safeguarding personal data and managing incidents when they occur. The regulator issued an official statement alongside the toolkit's publication. [1]
The toolkit is presented as a hands-on resource rather than a legislative change: it stresses preventative measures such as limiting unnecessary collection of pupil images and personal details, educating staff and students about the risks of manipulated media, and setting clear reporting and response procedures for schools and parents. The PCPD also advises practical technical and organisational steps to reduce the likelihood that imagery and other data will be repurposed to create harmful deepfakes. [1]
The release forms part of a broader regulatory approach in Hong Kong that has, to date, favoured guidance and enforcement of existing privacy rules over new, AI-specific statutes. In May 2025 the Privacy Commissioner, Ada Chung, told audiences that the current privacy framework is sufficient to address AI-related concerns, a position reflected in the PCPD's emphasis on toolkits and checklists rather than fresh legislation. [2]
That approach has precedent in the regulator's recent actions and findings. In April 2025 the PCPD issued a checklist for employers on employees' use of generative AI, urging organisations to adopt internal policies covering permissible AI use, privacy protection, bias mitigation and security. The deepfake toolkit for schools mirrors that pragmatic, guidance-led strategy aimed at embedding responsible practices across sectors. [5][1]
Regulatory interventions have also prompted private-sector changes. Following scrutiny from the PCPD, LinkedIn in October 2024 stopped using Hong Kong users' personal data to train its generative AI models, illustrating how enforcement and oversight can alter corporate data practices without immediate new legislation. The regulator’s recent compliance checks of 60 organisations found no contraventions of privacy law in their AI data practices, a result the PCPD presented as evidence that existing rules can be effective when applied and monitored. [4][7]
The PCPD’s broader enforcement and legislative context is relevant to the toolkit’s aims. Government figures point to a sharp decline in online doxxing, with cases falling roughly 90% since 2022, according to the Privacy Commissioner’s briefing to the Legislative Council, reflecting both legal overhaul and active regulation in the personal-data sphere. At the same time, officials have signalled sensitivity to business concerns about penalties and implementation, having discussed a phased rollout of privacy law revisions earlier in 2025 to ease transition pressures on industry. [3][6]
Taken together, the deepfake guidance sits within a regulatory toolkit that combines education, oversight and targeted intervention. The PCPD frames the toolkit as a practical step schools and parents can use now to reduce harm, while continuing to rely on existing privacy laws and supervisory activity to address emerging AI-related risks. [1][2][5]
📌 Reference Map:
- [1] (MLex) - Paragraphs 1, 2, 7
- [2] (MLex) - Paragraphs 3, 7
- [3] (MLex) - Paragraph 6
- [4] (MLex) - Paragraph 5
- [5] (MLex) - Paragraphs 4, 7
- [6] (MLex) - Paragraph 6
- [7] (MLex) - Paragraph 5
Source: Noah Wire Services