Why Does Google Try To Crawl When My Site Is Not Public?

Oct 20, 2008 - 8:24 am

A type of thread I see from time to time in the forums is from webmasters who see Googlebot trying to access their site, even though the site is not publicly available.

For example, a Google Groups thread has a webmaster from davenjudy.org asking why he sees Googlebot requests for fubar.local.davenjudy.org when that host name won't even resolve.

The simple answer is that there may still be links pointing to that address.

In fact, in this case, Google's JohnMu said there is: a page links directly to that host name. Here is the link:

<a class="exlink" href="http://fubar.local.davenjudy.org" rel="nofollow">http://fubar.local.davenjudy.org</a>

With that link in place, Googlebot will keep trying to crawl it from the linking page until the link is removed.
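If you want to track down where such a stray link lives in your own pages, a quick scan of the HTML does the job. Here is a minimal sketch in Python; the directory name and the host-name pattern are assumptions for illustration, not anything from the thread:

import pathlib
import re

# Hypothetical: match href attributes pointing at the internal host name
pattern = re.compile(r'href="([^"]*local\.davenjudy\.org[^"]*)"')

# "site_html" is a hypothetical directory of exported pages
for path in pathlib.Path("site_html").rglob("*.html"):
    for match in pattern.finditer(path.read_text(errors="ignore")):
        print(f"{path}: {match.group(1)}")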

Is this an issue? John from Google explains, "When we find links, we'll try to follow them just in case there is something that is crawlable :). In your case, we aren't able to resolve DNS for the host name, so we leave it at that."
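John's point about DNS is easy to verify yourself: if the name doesn't resolve, the crawl attempt goes nowhere. A minimal sketch in Python, using only the standard socket module (whether the example host name resolves depends on your DNS setup):

import socket

def resolves(hostname):
    # Attempt a DNS lookup, as any crawler would have to before fetching
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        # No record for the name -- Googlebot would "leave it at that"
        return False

print(resolves("fubar.local.davenjudy.org"))  # expect False on public DNS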

Forum discussion at Google Groups.