Scrape Bots Vs. Search Bots :: Fighting the Battle

Sep 12, 2006 • 7:06 am | Filed Under Search Engine Cloaking / IP Delivery

A Search Engine Watch Forums thread asks how one can prevent unauthorized spiders from scraping a site's content without hurting the site's rankings in the search engines.

This is a serious issue, serious enough that SES San Jose 2006 devoted a session to it, named The Bot Obedience Course. In that session, Bill Atchison of CrawlWall.com gave an excellent presentation.

In the thread, Robert Charlton notes that Bill will be releasing a software tool that helps do just that; he said there is a "Beta version coming soon." The crawlwall.com/technology.html page has details of the technology developed by CrawlWall.com:

CrawlWall uses the following technology to secure your website and protect your content. All of the various methods are designed to work together in harmony to make sure that all of the spiders with permission and legitimate visitors get into your website without issue, while all of the rogue crawlers get stopped and never gain admission.

These tactics include dynamic robots.txt files, whitelist opt-in permissions, "second pass filters," IP and/or address-range banning, proxy blocking, certain deliberate obstacles, and a quarantine list for uncertain IPs. A rough sketch of the whitelist idea appears below.
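To make the whitelist opt-in and "second pass filter" ideas concrete, here is a minimal sketch, assuming a Python filter sitting in front of the site. CrawlWall's actual implementation is not public, so the WHITELIST table and the is_verified_bot function here are hypothetical; the reverse-then-forward DNS check for Googlebot, however, is the verification method Google itself documents.

    import socket

    # Hypothetical whitelist: bots we have explicitly opted in, mapped to the
    # DNS suffixes their reverse lookups must resolve into. The Googlebot
    # suffixes are the ones Google documents; the bingbot entry is illustrative.
    WHITELIST = {
        "Googlebot": (".googlebot.com", ".google.com"),
        "bingbot": (".search.msn.com",),
    }

    def is_verified_bot(user_agent, ip):
        """Second-pass filter: trust a crawler's User-Agent only if reverse
        DNS confirms the IP really belongs to the crawler it claims to be."""
        for bot_name, suffixes in WHITELIST.items():
            if bot_name in user_agent:
                try:
                    host, _, _ = socket.gethostbyaddr(ip)  # reverse (PTR) lookup
                except socket.herror:
                    return False  # no PTR record: treat as a rogue crawler
                if not host.endswith(suffixes):
                    return False  # PTR resolves outside the crawler's domain
                try:
                    # Forward-confirm the hostname back to the IP so a
                    # spoofed PTR record cannot fool the check.
                    return ip in socket.gethostbyname_ex(host)[2]
                except socket.gaierror:
                    return False
        return False  # not on the whitelist: deny by default (opt-in)

A front controller would call is_verified_bot() on any request whose User-Agent claims to be a search crawler; requests that fail the check would fall through to the IP banning or quarantine rules like any other unknown visitor, which is the deny-by-default behavior the whitelist approach describes.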

I am looking forward to seeing how it works in the real world.

Forum discussion at Search Engine Watch Forums.

Comments:

Rehan Mohammed

05/25/2007 10:37 am

I have seen GoogleBot crawl my site almost daily, but when I log onto my Google.com account, it says it indexed my site 5 days back. Is this also a problem of scrape bots? Cheers, Rehan www.materialwords.com
