Beating the Scrapers To Google

Nov 27, 2012 • 8:51 am | Filed Under Google Search Engine Optimization

A typical issue for a new or not-yet-popular content site is having its content scraped by a site that may look more authoritative. When that happens, it is possible that Google will rank your content on a site that is not yours - yes, the site with the stolen content.

Google has a scraper algorithm, but in the situation above it can end up hurting the original content owner. So what can you do?

A WebmasterWorld thread has a conversation on just that topic. The members are discussing ways to help Google spot the content on your site before it spots it on a scraper site. Some of those techniques include:

  • Pinging Google through services like PubSubHubbub (a rough ping sketch follows this list)
  • Posting the content on social networking sites like Google+, Twitter and Facebook
  • Making the page live without the content and then, once the search engines crawl it, launching the content on the page in the hope that the spiders come back for it sooner.
  • Using the Fetch as Googlebot feature in Webmaster Tools
  • Trying Google Blog Search's pinging service (also in the sketch below).

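For the two pinging items, here is a minimal Python sketch, not an official client, of what those pings look like on the wire. It sends a PubSubHubbub "publish" notification to Google's public reference hub and a GET to Google Blog Search's ping endpoint. The site name, URL, and feed address are placeholders, and the Blog Search parameter names are assumed from the commonly documented form of that service.

    # Rough sketch of two pinging approaches mentioned above.
    # SITE_NAME, SITE_URL and SITE_FEED are placeholders - swap in your own.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    SITE_NAME = "Example Blog"                    # placeholder
    SITE_URL = "https://www.example.com/"         # placeholder
    SITE_FEED = "https://www.example.com/feed/"   # placeholder

    def ping_pubsubhubbub(hub="https://pubsubhubbub.appspot.com/"):
        """POST a PubSubHubbub 'publish' notification for the feed."""
        # The protocol is a form-encoded POST with hub.mode=publish and
        # hub.url set to the feed that just changed; the hub answers 204.
        data = urlencode({"hub.mode": "publish", "hub.url": SITE_FEED}).encode()
        with urlopen(hub, data=data) as resp:
            print("PubSubHubbub hub answered HTTP", resp.getcode())

    def ping_blog_search():
        """GET Google Blog Search's ping endpoint.

        Parameter names here are assumed from the commonly documented
        form of the service, not taken from the thread.
        """
        params = urlencode({"name": SITE_NAME, "url": SITE_URL,
                            "changesURL": SITE_FEED})
        with urlopen("http://blogsearch.google.com/ping?" + params) as resp:
            print("Blog Search ping answered HTTP", resp.getcode())

    if __name__ == "__main__":
        ping_pubsubhubbub()
        ping_blog_search()

Most blog platforms fire these pings automatically on publish; the point in the thread is simply to make sure something notifies Google the moment your content goes live.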
Share your ideas on this common issue.

Forum discussion at WebmasterWorld.
