China’s latest AI workplace experiment has triggered intense debate online after examples emerged of former employees being digitally “kept on” after leaving their jobs. The most discussed case began in March, when a 24-year-old engineer at the Shanghai Artificial Intelligence Laboratory built a project called “colleague.skill” in just four hours, feeding it internal chats, emails and documents so it could mirror how staff made technical decisions and communicated. By 20 April the project had attracted 15,500 GitHub stars, a level of attention only a tiny fraction of the platform’s projects ever reach. The idea was framed as a way to preserve corporate knowledge, but many Chinese users saw something far more unsettling: a worker’s digital double continuing the job after the human had gone.
The reaction sharpened when a second case surfaced involving Zhang Xuefeng, a well-known higher-education adviser who died in March. According to accounts circulating online, a developer used his published articles, interviews, speeches and conversations to build an AI chatbot that let students and other users keep talking to a version of him after his death. The project was reportedly created without the consent of Zhang’s family or his company, adding a sharper privacy and ethics dimension to the backlash.
Chinese media and tech-industry coverage suggest the controversy reflects how quickly AI tools have moved from specialist use into ordinary office work in China. Popular workplace communication platforms now offer official plug-ins that make it easier to deploy open-source and commercial models, while some companies have gone further and require staff to use AI products, even tying monthly token consumption to performance assessments. OfficeChai reported that by the end of 2025, 60 per cent of Chinese employees were using AI at least once a week, compared with 37 per cent in the United States.
The trend is not confined to China. ECNS reported in April that a gaming company in Shandong created a digital employee to carry on handling consultations, interview scheduling and presentation work after an HR specialist resigned, with the former worker’s consent. Other outlets have described similar AI clones and “digital humans” built from employees’ work histories, as well as celebrity-style avatars that can be queried for advice. The commercial appeal is obvious: companies can preserve know-how and keep operations running. But the backlash shows that many people are not yet comfortable with a bot inheriting a job, a voice or even a reputation.
Europe offers a very different legal backdrop. Under the EU’s GDPR, employee chats, emails and work documents are treated as personal data, and firms generally cannot reuse them without a lawful basis such as consent. That is why cases involving OpenAI in Italy and X in Ireland drew attention, even though neither ended in a penalty. For now, that framework means European workers are better insulated from the most aggressive versions of this practice, but the pace of AI development is fast enough that new grey areas are likely to emerge.
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [3]
- Paragraph 2: [3], [6]
- Paragraph 3: [2], [4]
- Paragraph 4: [2], [3], [6], [7]
- Paragraph 5: [1], [5]
Source: Noah Wire Services