Should Search Engines Be Immune to Copyright Infringement?

Jan 15, 2010 • 8:12 am | Filed Under Search Engine Industry News

A new bill before the UK Parliament, the Digital Economy Bill [HL] 2009-10, proposes to give search engines such as Google a form of immunity from being sued over copyright infringement. It is a bit more complex than that, but in short: if you want your content out of the search engines, block them - otherwise, you can't sue them for copyright infringement.
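For context, the "block them" mechanism in question is the familiar robots.txt convention. As a rough sketch - the bill does not spell out the exact directives, and example.com below is just a placeholder - a publisher who wanted to withhold that presumed licence from compliant crawlers could publish something like:

    # robots.txt served from the site root, e.g. http://example.com/robots.txt
    # Block every compliant crawler from the whole site:
    User-agent: *
    Disallow: /

    # Alternatively, block only Google's crawler and leave the rest free to index:
    # User-agent: Googlebot
    # Disallow: /

Under the proposal as The Guardian describes it, a file along these lines would be the "explicit evidence" that no licence was granted; without it, the search engines' copying would be presumed licensed.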

A Sphinn thread on the topic is pretty heated, primarily between Michael Gray and Danny Sullivan. Let me quote some of the conversation:

Danny Sullivan in response to Michael Gray:

Yeah, yeah, simmer down there troll boy :)

So the actual article this is talking about from The Guardian says this proposal also says:

The presumption (of having an automatic license) may be rebutted by explicit evidence that such a licence was not granted. Such explicit evidence shall be found only in the form of statements in a machine-readable file to be placed on the website and accessible to providers of search engine services.

In other words, this gives robots.txt legal backing. You block that way, search engines can't index you. Fair enough. I mean, that's how things have worked for ages with the respected search engines. But if some rogue spider copied you, you couldn't easily claim a copyright violation because robots.txt had no force of law. Now, you could sue saying they'd been restricted and still indexed your content.

Michael Gray in response to Danny Sullivan:

being a troll boy ;-) and not a lawyer I may be missing something, but this seems pretty clear...

In other words, Google would be free to copy everything - but a publisherblocking search spiders with a robots.txt file would be taken as withholding that right. An explicit "fair use" provision, which Google often cites against copyright-abuse claims, does not exist in UK law.

Google can copy whatever it wants, unless you block it with robots, so if you want to retain your copyright then you do so by slitting your own throat for search engine traffic. That just doesn't make any sense for anyone ... except Google.

The debate goes on and on in the thread, so if you are in a troll/rant mood or if you just find the topic interesting, do check it out.

Forum discussion at Sphinn.


Comments:

eltercerhombre

01/15/2010 04:00 pm

I won't be commenting my thoughts on whether SEs should be immune or not, but the robots.txt part seems a bit funny to me. Let's say they index me and then I block them: how long should I wait before suing them for copyright infringement? Google is known not to forget webpages very quickly.

Michael Martinez

01/15/2010 06:08 pm

Unless your content is protected by UK law, you might have a long wait. I think, however, that if robots.txt is to be given the weight of law then the law should be written to mandate what robots.txt does and does not legally protect or obligate everyone to -- now we're probably going to see a hodge-podge of legislation and judicial decrees governing bits and pieces of robots.txt. That's not good for anyone or anything but the lawyers' billables.

Peter

01/17/2010 06:49 pm

I have a different opinion though. I would prefer the search engines to show something - maybe an icon before or after each result in the SERPs - to tell visitors that the site is using content which is pirated or copied. I think it won't be possible for any search engine to make it easier than that.
