7 min read | Saved February 14, 2026
Do you care about this?
This article explores how "best X" lists are referenced in ChatGPT responses, revealing that updated lists often rank highly in AI-generated recommendations. The research categorizes 750 prompts across software, products, and agencies, highlighting the importance of fresh content and the variability of sources. It also notes that many cited websites have low authority, raising questions about quality in AI references.
If you do, here's more
OpenAI CEO Sam Altman recently announced that ChatGPT has over 800 million weekly users. Against that backdrop, the article explores how frequently updated "best X" lists dominate the sources ChatGPT cites. The research shows that brands ranked highly in third-party lists are more likely to be featured in the AI's responses. The author analyzed ChatGPT outputs for 750 top-of-the-funnel prompts spanning software, products, and agency recommendations, with queries like "best CRM software for enterprise" and "best entry-level DSLR." Despite some differences in search behavior between Google and chat platforms, the study found that high placement in these lists correlates with greater visibility in AI responses.
The analysis categorized mentions into first-party and third-party sources, with over 10,000 URLs examined. Manual categorization ensured accuracy, particularly since many lists don’t clearly state the number of items. The findings indicated that landing pages are more common in software and agency mentions, while general blog posts are more effective for product recommendations. Interestingly, 79.1% of analyzed pages were last updated in 2025, with 26% refreshed in the past two months. Keeping content current appears essential for both traditional SEO and AI visibility, as fresher articles tend to perform better.
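The categorization and freshness analysis described above could be sketched roughly as follows. This is an illustrative assumption, not the author's actual tooling: the function names, the first-party/third-party split by domain, and the sample data are all hypothetical.

```python
# Hypothetical sketch: split cited URLs into first-party vs third-party
# mentions and measure how recently cited pages were updated.
# All names and data here are illustrative, not from the study.
from datetime import date
from urllib.parse import urlparse

def categorize(citations, brand_domain):
    """Split (url, last_updated) pairs into first-party and third-party buckets."""
    first, third = [], []
    for url, last_updated in citations:
        host = urlparse(url).netloc.removeprefix("www.")
        (first if host == brand_domain else third).append((url, last_updated))
    return first, third

def freshness_share(citations, cutoff):
    """Fraction of cited pages last updated on or after `cutoff`."""
    if not citations:
        return 0.0
    return sum(d >= cutoff for _, d in citations) / len(citations)

# Illustrative sample data (not from the study).
cites = [
    ("https://example.com/best-crm", date(2025, 11, 3)),
    ("https://blog.example.org/top-dslr-cameras", date(2024, 6, 1)),
    ("https://example.com/pricing", date(2025, 12, 20)),
]
first, third = categorize(cites, "example.com")
print(len(first), len(third))                              # 2 1
print(round(freshness_share(cites, date(2025, 1, 1)), 2))  # 0.67
```

At the scale of 10,000+ URLs, a script like this only gets you part of the way; as the article notes, manual review was still needed because many list pages do not clearly state how many items they contain.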
Concerns about the legitimacy of sources surfaced in the research. Many cited pages lack credibility and show minimal organic visibility. Notably, 28% of the most-cited sources by ChatGPT have no organic traffic at all. This raises questions about the quality of the information being presented in AI responses. The research aims to establish a benchmark for content types that can influence visibility in AI outputs, highlighting the importance of maintaining high-quality, up-to-date resources.