Don't Block Your 301 Redirects with a Robots.txt File

May 27, 2008 • 5:56 am | Filed Under SEO - Search Engine Optimization

A Google Groups thread has a very interesting discussion that is nearly complete. The discussion takes you through the life cycle of a 301 redirect. The site owner moved from a .com to a .info after a domain name sale, but wanted to retain his links, so he set up a 301 redirect from the .com to the .info for a certain period of time.
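A domain-wide 301 like the one described is typically done at the server level. Here is a minimal sketch for Apache's mod_rewrite in an .htaccess file on the old .com host; the domain names are placeholders, not the actual domains from the thread:

```apache
# Hypothetical .htaccess on the old .com server: send every request,
# path included, to the same path on the new .info domain with a 301.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?olddomain\.com$ [NC]
RewriteRule ^(.*)$ http://olddomain.info/$1 [R=301,L]
```

The R=301 flag is what tells search engines the move is permanent rather than temporary.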

Beyond the thread covering a ton of details that are critical to such a move, I wanted to highlight one point made by Googler JohnMu. John said that you should not use "the robots.txt to block crawling while you have a 301 redirect enabled for the domain. By blocking crawling, you're effectively blocking the search engines from recognizing the redirect properly."
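Concretely, the mistake JohnMu is warning about looks like this robots.txt on the old domain:

```
# robots.txt on the OLD domain -- do not do this while a 301 is in place.
# If Googlebot is disallowed from fetching the URLs, it never receives
# the 301 responses, so it cannot transfer the old pages to the new domain.
User-agent: *
Disallow: /
```

The redirect only works if crawlers are allowed to request the old URLs and see the 301 status code for themselves.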

I wonder how many people do that, because I never would have thought anyone did.

Beyond that, there is some discussion of how long the 301 should stay in place before handing the old domain over to someone else. If you keep the 301 live for three weeks and then hand the old domain over to the new owner, and that owner drops the redirect, will Google return the old links to the old domain or keep them at the new one? Some suggest keeping the 301 live for at least six months.

There are many tips in the thread for such a process, including collecting as much linkage data as you can from the previous domain. You can collect linkage data via Yahoo Site Explorer, Google Webmaster Tools, your web analytics, your own database scripts and more. That way you can go back to those sites and ask them to update their links to the new domain.
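Once you have a backlink export, the useful output is a deduplicated list of referring sites to contact. Here is a small Python sketch; the two-column CSV layout is an assumption for illustration, since each tool's export format differs:

```python
import csv
import io
from urllib.parse import urlparse

# Hypothetical backlink export. Yahoo Site Explorer and Google Webmaster
# Tools both offer downloads of your inbound links, but the exact columns
# vary, so this layout is assumed for the example.
EXPORT = """\
linking_page,target_page
http://blog.example.org/post-1,http://olddomain.com/page-a
http://blog.example.org/post-2,http://olddomain.com/page-b
http://news.example.net/story,http://olddomain.com/
"""

def linking_sites(export_csv):
    """Return the unique referring hosts that link to the old domain."""
    reader = csv.DictReader(io.StringIO(export_csv))
    return sorted({urlparse(row["linking_page"]).netloc for row in reader})

print(linking_sites(EXPORT))
# → ['blog.example.org', 'news.example.net']
```

Three inbound links collapse to two referring hosts, which is the contact list for link-update requests.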

Forum discussion at Google Groups.
