Why Does Google Try To Crawl When My Site Is Not Public?

Oct 20, 2008 - 8:24 am

A typical thread I see from time to time in the forums is from a webmaster who sees Googlebot trying to access a site that is not publicly available.

For example, in a Google Groups thread, a webmaster from davenjudy.org asks why he sees Googlebot requests for fubar.local.davenjudy.org when that hostname won't even resolve.

The simple answer is that there may still be links pointing to that address.

In fact, in this case, Google's JohnMu confirmed there is: a page links directly to that hostname. Here is the link:

http://fubar.local.davenjudy.org

As long as that link remains on the linking page, Googlebot will keep trying to follow it; the crawl attempts will only stop once the link is removed.
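To illustrate how a crawler ends up queueing such a URL in the first place, here is a minimal sketch of link discovery in Python. This is not Google's actual pipeline, just the general technique of harvesting hrefs from fetched HTML; the page snippet is a hypothetical stand-in for the linking page:

from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    # Collects href values from <a> tags, the way a crawler
    # discovers new URLs to queue, whether or not the target
    # hostname is publicly resolvable.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page snippet containing the link from the thread.
page = '<a href="http://fubar.local.davenjudy.org">internal link</a>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['http://fubar.local.davenjudy.org']

As long as that href sits on a page Googlebot can fetch, the URL keeps re-entering the crawl queue.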

Is this an issue? John from Google explains, "When we find links, we'll try to follow them just in case there is something that is crawlable :). In your case, we aren't able to resolve DNS for the host name, so we leave it at that."
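If you want to reproduce the DNS failure John describes, a quick check from your own machine might look like the sketch below. The hostname is the one from the thread, and the script is only an illustration of a failed lookup, not how Googlebot itself does DNS:

import socket

hostname = "fubar.local.davenjudy.org"  # hostname from the thread

try:
    address = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {address}")
except socket.gaierror as error:
    # This is the case John describes: the lookup fails,
    # so the crawler notes the failure and moves on.
    print(f"{hostname} does not resolve: {error}")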

Forum discussion at Google Groups.
