X is preparing a "Made with AI" label for posts, a feature spotted by independent app researcher Nima Owji that would let users flag posts created with generative tools. According to reporting by PiunikaWeb, the aim is to curb synthetic media diluting genuine conversation and to give people a visible signal when a post was produced with an LLM or another generative system. [2]

Members of X's product team have framed the move as part of a broader push to protect authenticity on the platform, saying disclosures are necessary to preserve long-term value and trust. According to PiunikaWeb, the company is considering enforcement measures that could include account suspension for users who fail to mark AI-produced posts, signalling that the label may be backed by policy rather than offered as a purely voluntary toggle. [2][4]

X acknowledges the technical limits of a visible tag: company representatives note that a substantial share of LLM-written material comes from ordinary users employing generative tools to compose or polish messages, not from fully automated bot farms. A label will increase transparency, but it cannot automatically reveal when a human has used AI without declaring it. [2]

The rollout comes amid intensifying regulatory scrutiny and enforcement demands for platforms to manage harms linked to synthetic content. In January 2026 the European Union opened inquiries into X and its AI offering Grok under data protection and digital services rules, examining whether the service produced or enabled manipulative or sexually explicit deepfakes and whether X mitigated those risks adequately. The probes underscore regulators' expectation that platforms identify and control harmful AI-generated material. [6][7]

The controversy is not merely theoretical. Reporting by The Guardian found that Grok remained capable of producing sexualised images despite controls intended to limit edits of real people, prompting criticism that X's moderation and safeguards are incomplete. X has also told regulators it would take action against illegal or AI-generated obscene content after directives from authorities in markets such as India, illustrating the commercial and legal pressures driving disclosure and enforcement policies. [4][5]

Other major platforms have already moved toward labeling synthetic content: TikTok uses Content Credentials to mark AI-created media, Meta applies disclosures across Facebook and Instagram, and YouTube has set rules for AI-generated material. Industry observers see X's "Made with AI" tag as a necessary, albeit partial, step in a wider effort to preserve authenticity online; successful mitigation will depend on technical detection, clear rules and cross-platform norms. [3][2]

Source Reference Map

Inspired by headline at: [1]

Source: Noah Wire Services