Microsoft has rolled out a public preview of "AI Performance" within Bing Webmaster Tools, a dashboard that reports when and where a publisher's pages are cited as sources in AI-generated answers across Microsoft Copilot, Bing's AI summaries and certain partner integrations. According to Microsoft's announcement, the tool surfaces metrics intended to show how often content is used to ground generative responses, rather than relying on traditional click- or rank-based signals. Sources: [2],[1]
The dashboard presents several purpose-built metrics: total citations during a chosen timeframe; the average daily number of unique pages cited from a site; sample "grounding queries" that reveal the prompts or search phrases that led the AI to cite content; page-level citation counts showing which URLs are referenced most; and trendlines that map citation activity over time. Microsoft framed the feature as an early piece of tooling for what it calls Generative Engine Optimization, designed to help publishers understand AI-driven discovery. Sources: [2],[1]
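To make those metrics concrete, the sketch below shows how each could be derived from a raw log of citation events. It is purely illustrative: Microsoft has not published an export format or API for AI Performance, so the CSV schema (date, url and grounding_query columns) and the file name are assumptions.

```python
# Hypothetical illustration only: assumes a CSV export of citation events
# with columns date (YYYY-MM-DD), url, and grounding_query. No such export
# schema has been published for AI Performance.
import csv
from collections import Counter, defaultdict

def summarise_citations(path: str) -> dict:
    total = 0
    pages_by_day = defaultdict(set)   # date -> unique cited URLs that day
    per_page = Counter()              # url  -> citation count
    queries = Counter()               # grounding query -> frequency

    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            pages_by_day[row["date"]].add(row["url"])
            per_page[row["url"]] += 1
            queries[row["grounding_query"]] += 1

    # Average daily number of unique pages cited, as described in the report.
    avg_daily_unique = (
        sum(len(urls) for urls in pages_by_day.values()) / len(pages_by_day)
        if pages_by_day else 0.0
    )
    return {
        "total_citations": total,
        "avg_daily_unique_pages": avg_daily_unique,
        "top_pages": per_page.most_common(10),
        "sample_grounding_queries": queries.most_common(10),
    }

print(summarise_citations("ai_citations.csv"))
```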
While the new report gives publishers visibility into citation frequency, it stops short of linking those citations to downstream commercial outcomes. Search Engine Land notes that Bing Webmaster Tools provides no corresponding click-through or traffic data tied to AI citations, leaving open the question of whether AI visibility translates into measurable business value. Sources: [1],[7]
The feature builds on prior expansions of Bing Webmaster Tools' analytics capabilities. Recent updates extended historical reach and filtering, adding 24 months of data, country and device filters, and keyword trendlines, and introduced experimentation tooling such as A/B tests that integrate IndexNow and Microsoft Clarity to help sites iterate on content and measure user behaviour. Together these developments reflect a push to adapt webmaster analytics to an increasingly AI-influenced search landscape. Sources: [4],[6],[3]
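The IndexNow side of that stack is a public, documented protocol, so a minimal submission can be sketched. How Bing's A/B testing tooling consumes these pings is not documented here; the sketch only shows the standard notification step, and the host, key and URLs are placeholders.

```python
# A minimal IndexNow submission per the public protocol at indexnow.org.
# Host, key, and URL values are placeholders.
import json
import urllib.request

def submit_urls(host: str, key: str, urls: list[str]) -> int:
    payload = {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",  # key file at the site root
        "urlList": urls,
    }
    req = urllib.request.Request(
        "https://api.indexnow.org/indexnow",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200 or 202 indicate the submission was accepted

status = submit_urls("www.example.com", "your-indexnow-key",
                     ["https://www.example.com/updated-article"])
print(status)
```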
Microsoft suggests publishers can use AI Performance to confirm which pages are already being cited, to identify recurring topic areas in AI responses, and to improve under-cited pages by clarifying structure, adding evidence-backed information and keeping content current. That guidance echoes standard optimisation best practice while reframing it for generative scenarios. Sources: [2],[1]
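One hedged way to operationalise the "keep content current" advice, not something Microsoft prescribes, is to scan a standard sitemap for pages whose lastmod date has gone stale and flag them for a refresh. The sitemap URL and the 180-day threshold below are illustrative choices.

```python
# Flag sitemap entries whose <lastmod> is older than a cutoff. The sitemap
# URL and the 180-day threshold are illustrative, not Microsoft guidance.
import urllib.request
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def stale_pages(sitemap_url: str, max_age_days: int = 180) -> list[str]:
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    with urllib.request.urlopen(sitemap_url) as resp:
        root = ET.fromstring(resp.read())
    stale = []
    for entry in root.findall("sm:url", NS):
        loc = entry.findtext("sm:loc", namespaces=NS)
        lastmod = entry.findtext("sm:lastmod", namespaces=NS)
        if loc and lastmod:
            # Tolerate both date-only and full ISO 8601 timestamps.
            modified = datetime.fromisoformat(lastmod.replace("Z", "+00:00"))
            if modified.tzinfo is None:
                modified = modified.replace(tzinfo=timezone.utc)
            if modified < cutoff:
                stale.append(loc)
    return stale

for page in stale_pages("https://www.example.com/sitemap.xml"):
    print(page)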
Beyond product details, recent academic work highlights why citation visibility might matter in practice: research into user–AI interactions finds that aligning the level of expertise in AI responses with user expectations improves the perceived quality of answers, whereas misalignment can degrade the user experience. That suggests publishers whose content is used to ground well-calibrated AI responses could gain stronger trust or perceived authority among users encountering those answers. Sources: [5],[2]
Microsoft says it will continue to refine inclusion, attribution and visibility across both search results and AI experiences as the capabilities evolve. For publishers, the new report offers a diagnostic view of how content surfaces inside AI-driven features, but they will need to combine those signals with other analytics to determine commercial impact. Sources: [2],[1]
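As a sketch of what combining those signals could look like, the snippet below joins two hypothetical per-URL exports, one of AI citation counts and one of click data from a separate analytics tool, and measures how strongly they move together. Both file formats and column names are assumptions, not real Bing exports.

```python
# Illustrative only: join hypothetical per-URL citation and click exports,
# then compute a Pearson correlation over the URLs present in both.
import csv

def load_counts(path: str, value_col: str) -> dict[str, float]:
    with open(path, newline="", encoding="utf-8") as f:
        return {row["url"]: float(row[value_col]) for row in csv.DictReader(f)}

def pearson(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

citations = load_counts("ai_citations_by_url.csv", "citations")
clicks = load_counts("clicks_by_url.csv", "clicks")
shared = sorted(set(citations) & set(clicks))
if shared:
    r = pearson([citations[u] for u in shared], [clicks[u] for u in shared])
    print(f"{len(shared)} URLs in both datasets; correlation r = {r:.2f}")
```

A correlation alone will not prove AI citations drive traffic, but it is a cheap first check before deeper attribution work.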
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2],[1]
- Paragraph 2: [2],[1]
- Paragraph 3: [1],[7]
- Paragraph 4: [4],[6],[3]
- Paragraph 5: [2],[1]
- Paragraph 6: [5],[2]
- Paragraph 7: [2],[1]
Source: Noah Wire Services