Google To Drop Any Support For crawl-delay, nofollow, and noindex in robots.txt

Jul 2, 2019 - 8:34 am


Google posted this morning that it is going to stop unofficially supporting the noindex, nofollow and crawl-delay directives within robots.txt files. Google has been telling webmasters not to do this for years, hinted this change was coming really soon, and now it is here.

Google wrote "While open-sourcing our parser library, we analyzed the usage of robots.txt rules. In particular, we focused on rules unsupported by the internet draft, such as crawl-delay, nofollow, and noindex. Since these rules were never documented by Google, naturally, their usage in relation to Googlebot is very low. Digging further, we saw their usage was contradicted by other rules in all but 0.001% of all robots.txt files on the internet. These mistakes hurt websites' presence in Google's search results in ways we don’t think webmasters intended."

In short, if you use crawl-delay, nofollow or noindex in your robots.txt file, Google will stop honoring them on September 1, 2019. Google currently does honor some of those implementations, even though they are "unsupported and unpublished rules," but will stop doing so on that date.
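For illustration only (this snippet is not from Google's post, and the paths are hypothetical), a robots.txt file like the one below mixes a supported rule with the three that are going away; after September 1, 2019, Googlebot will only act on the Disallow line:

    User-agent: *
    Disallow: /private/      # still supported
    Crawl-delay: 10          # ignored by Googlebot after September 1, 2019
    Noindex: /drafts/        # ignored by Googlebot after September 1, 2019
    Nofollow: /archive/      # ignored by Googlebot after September 1, 2019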

Google may send out notifications via Google Search Console if you are using these unsupported commands in your robots.txt files.

Like I said above, Google has been telling webmasters and SEOs not to use noindex in robots.txt:

Google told us this change would happen eventually:

Gary Illyes is to blame for this:

He said he is honestly sorry:

But Google looked at and analyzed the impact and saw a small impact, if any. In fact, they are not making the change for a few months and, like I said above, may email those who will be impacted:

So now is the time to bulk up your audits to make sure that your clients are not depending on these unsupported commands in their robots.txt files.
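To make that audit concrete, here is a minimal sketch (mine, not Google's) that fetches a site's robots.txt and flags any lines using the soon-to-be-ignored directives. It assumes Python 3, and the domain in the list is a placeholder for your clients' sites:

    # Minimal robots.txt audit sketch: flag lines using directives that
    # Googlebot will ignore after September 1, 2019.
    import urllib.request

    UNSUPPORTED = ("noindex", "nofollow", "crawl-delay")

    def audit_robots_txt(site):
        """Return (line number, line) pairs that use unsupported directives."""
        with urllib.request.urlopen(site.rstrip("/") + "/robots.txt") as resp:
            body = resp.read().decode("utf-8", errors="replace")
        flagged = []
        for lineno, line in enumerate(body.splitlines(), start=1):
            directive = line.split(":", 1)[0].strip().lower()
            if directive in UNSUPPORTED:
                flagged.append((lineno, line.strip()))
        return flagged

    if __name__ == "__main__":
        for site in ["https://www.example.com"]:  # placeholder: your clients' domains
            for lineno, line in audit_robots_txt(site):
                print(site, "robots.txt line", lineno, ":", line)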

Here is what Google posted in terms of noindex directive alternatives:

  • Noindex in robots meta tags: Supported both in the HTTP response headers and in HTML, the noindex directive is the most effective way to remove URLs from the index when crawling is allowed (see the example after this list).
  • 404 and 410 HTTP status codes: Both status codes mean that the page does not exist, which will drop such URLs from Google's index once they're crawled and processed.
  • Password protection: Unless markup is used to indicate subscription or paywalled content, hiding a page behind a login will generally remove it from Google's index.
  • Disallow in robots.txt: Search engines can only index pages that they know about, so blocking the page from being crawled usually means its content won’t be indexed.  While the search engine may also index a URL based on links from other pages, without seeing the content itself, we aim to make such pages less visible in the future.
  • Search Console Remove URL tool: The tool is a quick and easy method to remove a URL temporarily from Google's search results.
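To illustrate the first alternative above (generic markup, not taken from Google's post), the same noindex signal can be sent either in the page's HTML or as an HTTP response header:

    <!-- Option A: robots meta tag in the page's <head> -->
    <meta name="robots" content="noindex">

    Option B: X-Robots-Tag HTTP response header (handy for non-HTML files such as PDFs)
    HTTP/1.1 200 OK
    X-Robots-Tag: noindex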

Forum discussion at Twitter.

 
