5 min read | Saved February 14, 2026
This article argues that Google Search Console (GSC) data is about 75% incomplete at the impression level, making decisions based solely on it unreliable. The author analyzes data across multiple B2B sites, highlighting issues like privacy filtering, bot impressions, and the impact of AI Overviews on click metrics.
Google Search Console (GSC) data is heavily filtered, with about 75% of impressions stripped from query-level reports, making reliance on this data for decision-making risky. A recent analysis of 450 million impressions across 10 B2B SaaS sites found that Google’s privacy measures and bot activity drastically distort the numbers. The filtering rate for clicks is better but still substantial, at around 38%. The analysis method was straightforward: compare GSC’s aggregate totals against the sum of its query-level rows; the gap between the two shows how much information gets stripped away.
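The aggregate-versus-query-level comparison described above reduces to simple arithmetic. A minimal sketch, using illustrative round numbers rather than the study’s raw exports:

```python
def filtering_rate(aggregate_total: int, query_level_total: int) -> float:
    """Share of aggregate events missing from the query-level export.

    GSC's aggregate report counts all impressions/clicks, while the
    query-level export omits privacy-filtered ("anonymized") queries,
    so the gap between the two approximates the filtering rate.
    """
    if aggregate_total == 0:
        return 0.0
    return (aggregate_total - query_level_total) / aggregate_total

# Illustrative figures only, chosen to match the rates the article cites:
impression_filtering = filtering_rate(450_000_000, 112_500_000)  # 0.75
click_filtering = filtering_rate(1_000_000, 620_000)             # 0.38
```

The same calculation works per-site or per-date range, which is how a falling or rising filtering rate over time would surface.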
Several key changes over the past year have exacerbated the issue. Google’s introduction of “SearchGuard” in January 2025, which uses advanced signals to differentiate between human users and bots, has led to an increase in bot impressions. In March 2025, the rollout of AI Overviews caused a spike in impressions but a decline in clicks, indicating that these features are cannibalizing traditional search results. The data shows a strong correlation between the presence of AI Overviews and reduced click-through rates, complicating the picture for SEO professionals trying to assess their performance accurately.
Moreover, the rise of bot impressions, up 25% over the last 180 days, highlights the ongoing challenge of separating automated activity from genuine user engagement. Queries containing more than ten words that generate impressions but never clicks are often indicative of bot activity. This trend suggests that scrapers are adapting to Google’s defenses, and the inflated impression counts could distort SEO strategy. Overall, the current state of GSC data calls for a more nuanced approach to measurement, encouraging teams to build robust frameworks for interpreting the filtered data and understanding real user behavior.
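The long-query, zero-click pattern lends itself to a simple heuristic filter. A sketch assuming hypothetical row fields (GSC’s actual export schema may differ):

```python
def looks_like_bot(query: str, impressions: int, clicks: int) -> bool:
    """Heuristic from the pattern described in the article: very long
    queries (>10 words) that accumulate impressions but zero clicks."""
    return len(query.split()) > 10 and impressions > 0 and clicks == 0

# Invented example rows, not real GSC data.
rows = [
    {"query": "best crm", "impressions": 1200, "clicks": 40},
    {"query": "what is the best b2b saas crm platform for small "
              "marketing teams in 2025", "impressions": 300, "clicks": 0},
]

suspect = [r for r in rows
           if looks_like_bot(r["query"], r["impressions"], r["clicks"])]
print(len(suspect))  # the long zero-click query is flagged
```

A heuristic this blunt will misclassify some genuine long-tail searches, so in practice it would be one signal among several rather than a hard filter.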