Generative AI tools such as ChatGPT, Claude, Gemini and Perplexity are increasingly surfacing links inside their answers, but the mechanics behind those references remain largely opaque. Practical Ecommerce said the major platforms have not disclosed their citation rules or offered meaningful optimisation guidance, even as evidence from studies and patents suggests they do not all reach the same sources in the same way. Some systems appear to lean on traditional search engines, while others draw from their own indexes or knowledge layers, creating very different routes to visibility.
That split matters because the engines do not all behave alike. Research cited by Agent Patterns says ChatGPT routes queries through Bing, Claude relies on Brave Search, Perplexity uses its own index and Gemini draws from Google's Knowledge Graph, while other analyses say ChatGPT can also favour publication partners regardless of external rankings. A separate study from Loamly described ChatGPT as sending multiple sub-queries to Bing and Perplexity as using a multi-stage reranking system, underscoring how retrieval logic, rather than broad content quality alone, helps determine what gets surfaced.
The kind of citation also changes the picture. Practical Ecommerce described grounded citations as those that shape the answer itself, while ungrounded citations act more like confirmation of what the model already "knows". It also flagged ghost citations, where a link appears without a named source, and invisible citations, where material appears to inform an answer without being credited at all. That matters because, according to an Ahrefs study cited by the article, a large share of retrieved URLs is never shown, suggesting that being used by the model and being visibly credited are not the same thing.
For brands, the practical takeaway is that AI visibility is becoming fragmented rather than universal. Yext said in a large-scale analysis that Gemini often favours official websites, while ChatGPT's results can vary by industry; Loamly likewise found weak correlation between visibility on one platform and another. BeVisibleIQ added that different content formats tend to win attention at different stages of the buying journey, with listicles stronger in consideration, comparison pages in evaluation, pricing guides in decision-making and how-to material during implementation.
That makes strategy less about chasing a single ranking and more about matching the way each system gathers and weighs information. Practical Ecommerce argued that direct and indirect exposure to prompts is still the priority, whether the model answers from training data, retrieved pages or a mixture of both. The broader lesson from the studies is that structured first-party content, up-to-date facts, authoritative lists, brand search demand and accurate listings all appear to improve the odds of being selected, but the exact balance differs from one AI engine to another.
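As one illustration of what "structured first-party content" and "accurate listings" can mean in practice, many brands publish schema.org JSON-LD markup on their own sites so that crawlers and knowledge layers can read facts directly. The sketch below is a generic example under that assumption; the organisation name, URLs and contact details are placeholders, not drawn from any of the cited studies, and none of the AI platforms has confirmed that this markup influences citation selection:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://www.example.com",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Brand",
    "https://www.linkedin.com/company/example-brand"
  ],
  "contactPoint": {
    "@type": "ContactPoint",
    "contactType": "customer service",
    "email": "support@example.com"
  }
}
```

Keeping such markup consistent with the facts stated elsewhere on the site is the point: machine-readable, up-to-date first-party data gives any retrieval system less reason to rely on third-party descriptions.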
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [3]
- Paragraph 2: [3], [4]
- Paragraph 3: [2], [7]
- Paragraph 4: [5], [6], [2]
- Paragraph 5: [2], [3], [5], [6], [7]
Source: Noah Wire Services