Swedish investigative reporting has revealed that recordings captured by Meta's Ray‑Ban smart glasses are routinely routed to teams of annotators in Nairobi, where staff employed by outsourcing firms label footage used to train the device's artificial‑intelligence features. According to the newspapers behind the probe, the material reviewed by Kenyan workers includes highly intimate scenes and other sensitive content filmed by wearers of the glasses. (Sources: Göteborgs‑Posten and Svenska Dagbladet, as reported by international outlets.) [2],[5]

Workers described being asked to classify images, video and transcripts so the assistant can better recognise objects, interpret requests and translate languages, a task that requires humans to examine raw media. Several people interviewed said the clips they saw sometimes showed people using bathrooms, undressing or engaging in sexual activity, as well as exposed payment cards and private conversations. Those accounts are echoed in multiple reports summarising the investigation. [2],[3]

Journalists also carried out technical analysis suggesting the devices communicate with remote servers when their AI functions are invoked, meaning media must leave the user's device for the assistant to operate. Independent coverage noted that recordings are captured when wearers activate features by pressing a button or using the "Hey Meta" voice prompt, after which interactions can be processed automatically or inspected by humans. [6],[3]

Privacy advocates warn that many consumers may not appreciate how little control they retain over their data once it is absorbed into training systems. Kleanthi Sardeli of the Vienna‑based group None Of Your Business said: "Once the material has been fed into the models, the user in practice loses control over how it is used." Her comments underline concerns about transparency over when recording begins and what content is retained. [2],[3]

Meta has acknowledged that media used by the assistant can be transferred and processed globally and that it remains responsible for protecting user information under European law even if handling occurs outside the EU. The company declined to answer detailed questions about whether and how subcontractors such as Nairobi‑based annotation firms access specific recordings, saying only that media are processed in line with its terms of service and privacy policy. Independent reporting highlighted that staff who review material are bound by confidentiality agreements and barred from bringing recording devices into review facilities. [7],[2]

The revelations add to a broader debate about the ethics of outsourced data labelling and the limits of consent when powerful AI systems rely on human review. Industry commentators and digital‑rights campaigners quoted in the coverage call for clearer user notices, stronger safeguards against the exposure of sensitive moments and independent audits of how consumer AI products route and protect media. Without such measures, investigators warn, users may remain unaware that private moments captured by wearable devices can be seen and annotated far from where they were filmed. [5],[4]

Source Reference Map

Inspired by headline at: [1]

Source: Noah Wire Services