Google: If You Can't Block No-Results-Found Internal Search Pages, Then Block All Search Results

Apr 18, 2023 • 7:41 am | comments (2) by | Filed Under Google Search Engine Optimization


Google's John Mueller went on a bit of a rant on Reddit on the topic of having your internal search result pages crawlable and indexable. As you know, in general, Google does not want to index search result pages, but John said this is especially true for search pages that return no results.

In short, John said, "if you can't select which search results pages should be indexable, you should block all of them from being indexable - use robots.txt disallow or noindex robots meta tag."
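The two mechanisms John names could be sketched as follows, assuming a site that serves its internal search at a hypothetical /search path. First, blocking crawling of all search result pages in robots.txt:

```
# robots.txt — block crawling of all internal search result pages
User-agent: *
Disallow: /search
```

Alternatively, a noindex robots meta tag in the `<head>` of each search result page:

```html
<meta name="robots" content="noindex">
```

Note that the two are not interchangeable: robots.txt blocks crawling while noindex blocks indexing, and Googlebot must be able to crawl a page in order to see its noindex tag, so a page disallowed in robots.txt cannot also rely on the meta tag.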

The rant is kind of fun to read, so here it is:

Unfortunately, lots of CMSs, hosting platforms, ecommerce platforms, etc still don't robot or noindex search results pages by default. We've given that guidance for probably over a decade. Especially if a search results page returns no results, there's no reason for it to be indexable. And even for other search results pages, it's a good practice to either block them all, or to only allow a hand-selected set to be indexed (eg, known product-type queries, where the results are more like category pages). If you can't limit the indexable search results pages, I would strongly recommend noindexing or robotting *all* of the search pages. It's still a regular occurrence that we see sites spam the search results with open search results pages -- it doesn't take much work to prevent, and cleaning it up afterwards is such a hassle.
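The "hand-selected set" approach John describes could be sketched in robots.txt like this, since Google's robots.txt parser supports Allow rules and the `$` end-of-URL anchor (the paths and query here are hypothetical, and Google applies the most specific matching rule):

```
User-agent: *
# Allow a curated search page that behaves like a category page
Allow: /search?q=running-shoes$
# Block every other internal search result page
Disallow: /search
```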

In 2007, Google told webmasters to block internal search results from being indexed. The original guideline reads "Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don’t add much value for users coming from search engines." Now it reads "Use the robots.txt file on your web server to manage your crawling budget by preventing crawling of infinite spaces such as search result pages."
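Whether a given URL is covered by such a disallow rule can be checked with Python's standard-library robots.txt parser; the domain and paths below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt blocking internal search result pages,
# in line with Google's long-standing guideline.
robots_txt = """\
User-agent: *
Disallow: /search
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Internal search result URLs are blocked from crawling...
print(parser.can_fetch("Googlebot", "https://example.com/search?q=widgets"))   # False
# ...while regular pages remain crawlable.
print(parser.can_fetch("Googlebot", "https://example.com/products/widgets"))   # True
```

This is a quick way to sanity-check a robots.txt change before deploying it, though note that `urllib.robotparser` implements the generic prefix-matching rules rather than every Google-specific extension.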

Then, ten years later, Google's John Mueller explained why Google doesn't want your search result pages in its index. He said, "they make infinite spaces (crawling), they're often low-quality pages, often lead to empty search results/soft-404s." He later explained that it is a quality "watering down" issue, and that these types of pages can show up as soft 404 responses in Search Console.

Forum discussion at Reddit.
