X is developing a 'Made with AI' label to identify AI-generated posts, aiming to curb misinformation and uphold platform integrity amid increasing regulatory pressure and industry moves towards transparency.
X is preparing a "Made with AI" label for posts, a feature spotted by independent app researcher Nima Owji that would let users mark content created with generative tools as AI-originated. According to reporting by PiunikaWeb, the aim is to stop synthetic media from diluting genuine conversation and to give people a visible signal when a post was created with an LLM or other generative system. [2]
Members of X's product team have framed the move as part of a broader push to protect authenticity on the platform, saying disclosures are necessary to preserve long-term value and trust. According to PiunikaWeb, the company is considering enforcement measures that could include account suspension for users who fail to mark AI-produced posts, signalling that the label may be backed by policy, not only a user toggle. [2][4]
X acknowledges the technical limits of a visible tag: company representatives note that a substantial share of LLM-written material comes from ordinary users employing generative tools to compose or polish messages, not from fully automated bot farms. A label increases transparency, but it cannot by itself reveal when a human has used AI without declaring it. [2]
The rollout comes amid intensifying regulatory scrutiny and enforcement demands for platforms to manage harms linked to synthetic content. In January 2026 the European Union opened inquiries into X and its AI offering Grok under data protection and digital services rules, examining whether the service produced or enabled manipulative or sexually explicit deepfakes and whether X mitigated those risks adequately. The probes underscore regulators' expectation that platforms identify and control harmful AI-generated material. [6][7]
The controversy is not merely theoretical. Reporting by The Guardian found Grok was still capable of producing sexualised images despite controls intended to limit edits of real people, prompting criticism that X's moderation and safeguards remain incomplete. X has also told regulators it would take action against illegal or AI-generated obscene content after directives from authorities in markets such as India, illustrating the commercial and legal pressures driving disclosure and enforcement policies. [5][4]
Other major platforms have already moved toward labeling synthetic content: TikTok uses Content Credentials to mark AI-created media, Meta applies disclosures across Facebook and Instagram, and YouTube has set rules for AI-generated material. Industry observers see X's "Made with AI" tag as a necessary, albeit partial, step in a wider effort to preserve authenticity online; successful mitigation will depend on technical detection, clear rules and cross-platform norms. [3][2]
Source Reference Map
Inspired by headline at: [1]
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes:
The article reports on a feature spotted by independent app researcher Nima Owji on February 22, 2026, indicating that X is developing a 'Made with AI' label for posts. ([x.com](https://x.com/i/trending/2025631878901022748?utm_source=openai)) This suggests the information is current. However, reliance on a single source for this development raises concerns about the freshness and originality of the content. ([piunikaweb.com](https://piunikaweb.com/2026/02/23/x-to-roll-out-made-with-ai-labels-to-tackle-ai-generated-spam/?utm_source=openai))
Quotes check
Score: 6
Notes:
The article includes direct quotes attributed to Nikita Bier, a member of X's product team, discussing the importance of authenticity and the potential enforcement measures for failing to mark AI-generated posts. ([piunikaweb.com](https://piunikaweb.com/2026/02/23/x-to-roll-out-made-with-ai-labels-to-tackle-ai-generated-spam/?utm_source=openai)) However, these quotes cannot be independently verified through other sources, which diminishes their reliability.
Source reliability
Score: 5
Notes:
The primary source of the article is PiunikaWeb, a niche publication known for aggregating and summarising content from other sources. ([piunikaweb.com](https://piunikaweb.com/2026/02/23/x-to-roll-out-made-with-ai-labels-to-tackle-ai-generated-spam/?utm_source=openai)) This raises concerns about the independence and reliability of the information presented. Additionally, the article relies on a single source for the development of the 'Made with AI' label, which may not provide a comprehensive or verified account of the situation.
Plausibility check
Score: 7
Notes:
The concept of social media platforms implementing labels for AI-generated content aligns with industry trends, as seen with TikTok's 'AI-generated' labels and Meta's plans to add similar labels. ([macrumors.com](https://www.macrumors.com/2024/05/10/tiktok-label-ai-generated-content/?utm_source=openai)) However, the lack of independent verification and reliance on a single source for this development raises questions about the accuracy and completeness of the information.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
The article reports on X's development of a 'Made with AI' label for posts, citing a single source, PiunikaWeb, which in turn references an independent app researcher. The lack of independent verification and reliance on a single source raise concerns about the freshness, originality, and reliability of the information. Additionally, the inability to independently verify the quotes attributed to Nikita Bier further diminishes the credibility of the content. Given these issues, the article does not meet the necessary standards for publication under our editorial guidelines.