How Does robots.txt Behave on Domains and Subdomains?

Jun 18, 2007 • 9:36 am | Filed Under Google Search Engine Optimization

In a Google Groups thread, a member has set up a subdomain whose content duplicates that of his main domain:

in a nutshell, [the subdomain] and [the main domain] are pointing to the same content

To avoid duplicate content issues, he's looking for a way to implement the robots.txt file so that the subdomain is not indexed by Google, but the main domain itself is still indexed.

Bergy, one of Google's newest Webmaster Central representatives, offers some insights.

When a spider finds a URL, it takes the whole domain name (everything between 'http://' and the next '/'), then sticks a '/robots.txt' on the end of it and looks for that file. If that file exists, then the spider should read it to see where it is allowed to crawl.
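The URL-to-robots.txt mapping Bergy describes can be sketched in a few lines of Python. This is an illustration of the rule, not Googlebot's actual code, and the URLs used are hypothetical:

```python
from urllib.parse import urlsplit

def robots_txt_url(page_url: str) -> str:
    # Take everything between the scheme and the next '/' (the hostname),
    # then stick '/robots.txt' on the end, as described above.
    parts = urlsplit(page_url)
    return f"{parts.scheme}://{parts.netloc}/robots.txt"

# Hypothetical URLs for illustration:
print(robots_txt_url("http://sub.example.com/page.html"))
# http://sub.example.com/robots.txt
print(robots_txt_url("http://www.example.com/some/deep/page.html"))
# http://www.example.com/robots.txt
```

Note that the subdomain and the main domain map to different robots.txt files, which is exactly why they can be given different crawling rules.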

In your case, Googlebot, or any other spider, should try to access three robots.txt URLs, one for each hostname involved. The rules in each file are treated separately, so disallowing robots in the subdomain's robots.txt should remove the subdomain from search results while leaving the main domain unaffected, which sounds like exactly what you want.
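Under that per-hostname behavior, the fix is to serve a different robots.txt on each hostname. A sketch, using the hypothetical hostnames sub.example.com and www.example.com in place of the member's actual domains:

```text
# Served at http://sub.example.com/robots.txt (the subdomain): block all crawling
User-agent: *
Disallow: /

# Served at http://www.example.com/robots.txt (the main domain): allow all crawling
User-agent: *
Disallow:
```

An empty Disallow line permits everything, so the main domain stays indexable while the subdomain is kept out.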

When in doubt, Bergy suggests using the Google robots.txt Analysis Tool to confirm that your robots.txt is doing what you expect.

Forum discussion continues at Google Groups.
