A Swedish investigation has prompted renewed scrutiny of Meta’s Ray-Ban smart glasses after reports that workers at a Nairobi-based subcontractor viewed highly sensitive first‑person footage captured by wearers of the device. According to reporting by Svenska Dagbladet and Göteborgs‑Posten, contractors examined clips showing people in intimate situations, raising fresh questions about consent and data handling for wearable cameras.
An anonymous source speaking to the newspapers described scenes of people “going to the toilet, or getting undressed,” adding “I don’t think they know, because if they knew, they wouldn’t be recording.” Contractors told investigators they had also seen explicit sexual activity and financial information in material supplied for labelling.
Privacy advocates highlighted the broader implications of training artificial intelligence on such material. John Davisson, deputy director of enforcement at the Electronic Privacy Information Center, told Decrypt that wearers cannot consent on behalf of bystanders, and warned that using footage containing identifiable faces, voices and other personal data to train models compounds the privacy risks. “The wearer of the glasses cannot consent on behalf of all of the people they are encountering as they go through the world using these glasses,” he said. “You are compounding the privacy and data protection concerns, because you’re taking people’s personal information and using it to build your own model.”
Regulators have begun seeking clarity. The UK Information Commissioner’s Office told the BBC it will request information from Meta about how the company complies with data protection law, stressing that devices which process personal data must provide transparency and user control. European authorities have also questioned whether the glasses’ recording indicators and other safeguards adequately protect bystanders.
Meta’s own documentation notes that content sent to Meta AI can be reviewed by “automated or manual (human)” means, and that the company and its vendors may use user data to improve services, conduct research and ensure policy compliance. Meta has said privacy filters are applied to remove identifying information before human review, a claim that contractors and critics say does not square with the material they were shown.
The newspapers identified the Kenyan subcontractor involved in the work as Sama, whose employees in Nairobi manually annotate video, image and speech data for AI systems. Contractors interviewed for the investigation said they were instructed to describe and label every image and felt unable to question assignments for fear of losing their jobs. “There are also sex scenes filmed with the smart glasses. Someone is wearing them, having sex. That is why this is so extremely sensitive,” one contractor told reporters. “When you see these videos, it feels that way. But since it is a job, you have to do it,” another said. “You understand that it is someone’s private life you are looking at, but at the same time, you are just expected to carry out the work. You are not supposed to question it. If you start asking questions, you are gone.”
The revelations have led to legal action. Plaintiffs in a new lawsuit allege Meta misled consumers by marketing the Ray‑Ban Meta glasses as “designed for privacy, controlled by you” while allowing sensitive footage to be reviewed by offshore contractors, and claim breaches of privacy law and false advertising. Meta has not issued a substantive comment on the litigation, and the company did not respond to Decrypt’s request for comment.
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [6], [4]
- Paragraph 2: [6], [4]
- Paragraph 3: [6], [3]
- Paragraph 4: [7], [3]
- Paragraph 5: [3], [2]
- Paragraph 6: [6], [5]
- Paragraph 7: [2], [3]
Source: Noah Wire Services