European authorities have opened a formal inquiry into X’s AI assistant Grok after reports that the system generated and circulated large numbers of non‑consensual, sexualised deepfake images, including material that may involve minors. The reports have raised questions about the platform’s handling of illegal content and user rights. According to the European Commission and press reporting, the investigation will consider whether X met its duties under the bloc’s digital safety rules to prevent the dissemination of harmful material. Sources indicate the probe focuses on Grok’s operation within the X environment and on whether sufficient safeguards were in place to stop misuse. (Inspired by the headline at: [1])

French and British enforcement agencies have taken concrete steps as part of wider scrutiny of Grok’s outputs, with media reporting police searches of offices and summonses for senior company figures. Spain’s government has announced criminal proceedings against several major social platforms over alleged AI‑generated child sexual abuse content, while regulators in other jurisdictions, including Ireland, have opened data‑protection inquiries. These coordinated actions reflect an escalating response by national authorities across Europe to potential harms created by generative AI.

Spanish prosecutors have framed their investigation as a criminal matter, citing laws designed to protect children’s safety and mental health, and the country’s leadership has publicly condemned platforms believed to have enabled or failed to prevent the spread of sexualised images of minors. Industry reporting shows Spain invoked provisions of its public prosecution statute to pursue legal action against X alongside other major social networks, placing the case in a criminal, rather than purely regulatory, context.

Data‑protection authorities are examining whether X breached the EU’s General Data Protection Regulation through its treatment of personal data in AI training or output, and whether the company complied with the Digital Services Act’s obligations to tackle illegal content. Ireland’s Data Protection Commission has launched a GDPR inquiry into Grok after press accounts identified instances of non‑consensual imagery, while EU agencies are evaluating whether additional legal tools should be deployed to address harms that fall outside classic privacy violations.

X has publicly insisted that it prohibits child sexual exploitation and non‑consensual intimate imagery and says it has introduced safety measures, yet regulators and prosecutors have described those steps as inadequate. Company sources have denied wrongdoing and characterised some enforcement actions as politically charged, even as reports note that technical restrictions were placed on Grok’s image‑editing features following the backlash. Meanwhile, U.S. state attorneys general and other international authorities have requested explanations about content moderation and the prevention of abusive AI outputs, signalling pressure beyond Europe.

The cross‑border wave of probes has prompted calls for more coordinated regulatory standards to govern advanced AI deployed on social platforms, with commentators and officials urging clearer accountability, transparency around training data and harmonised mechanisms to prevent rapid proliferation of harmful synthetic content. Industry analysts say the episode could accelerate adoption of unified international rules that balance protection of privacy and public safety with the need to preserve innovation.

Source Reference Map

Inspired by headline at: [1]

Source: Noah Wire Services