In a Google Groups thread, a member has set up a subdomain that serves the same content as a separate standalone domain:
in a nutshell, domainB.domainA.com and domainB.com is pointing to the same content
To avoid duplicate content issues, he's looking for a way to implement the robots.txt file so that the subdomain (domainB.domainA.com) is not indexed by Google, but that the main domain itself (domainA.com) is still indexed.
Bergy, one of Google's newest Webmaster Central representatives, offers some insights.
When a spider finds a URL, it takes the whole domain name (everything between 'http://' and the next '/'), then sticks a '/robots.txt' on the end of it and looks for that file. If that file exists, then the spider should read it to see where it is allowed to crawl.
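That lookup can be sketched in a few lines of Python. This is an illustrative sketch of the rule Bergy describes, not Googlebot's actual code; the example URL reuses the hostnames from the thread:

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    """Return the robots.txt URL a crawler would fetch for page_url."""
    parts = urlsplit(page_url)
    # Keep everything between 'http://' and the next '/' (scheme + host),
    # then replace the page's path with '/robots.txt'.
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("http://domainB.domainA.com/some/page.html"))
# http://domainB.domainA.com/robots.txt
```

Note that the host is taken verbatim, so each subdomain resolves to its own robots.txt file.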
In your case, Googlebot, or any other spider, should try to access three URLs: domainA.com/robots.txt, domainB.domainA.com/robots.txt, and domainB.com/robots.txt. The rules in each are treated separately, so disallowing robots from domainA.com/ should result in domainA.com/ being removed from search results while domainB.domainA.com/ remains unaffected, which does not sound like something you want.
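Put differently, to keep the subdomain out of the index while leaving the main domain alone, the server would need to answer requests for domainB.domainA.com/robots.txt with a blocking file, while domainA.com/robots.txt stays permissive. A minimal blocking robots.txt, assuming the server can serve a different file per hostname, might look like:

```
User-agent: *
Disallow: /
```

How to serve different robots.txt files per hostname depends on the web server configuration (for example, separate virtual hosts or a rewrite rule), which the thread does not cover.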
When in doubt, Bergy suggests using the Google robots.txt Analysis Tool to check whether your robots.txt files are doing what you expect.
Forum discussion continues at Google Groups.