Preventing Google from Caching PDF Files

Sep 14, 2007 • 9:15 am | comments (2) by Tamar Weinberg | Filed Under Google Search Engine
 

In July, I wrote asking for ideas on how to prevent the "View as HTML" link from appearing next to PDF files in Google's search results. In other words, authors of PDF files don't want them to be cached.

A DigitalPoint Forums member seems to have found a way to do this without resorting to robots.txt, which would block the files from being crawled at all. After all, he wants his page to be crawled and indexed; he just doesn't want the HTML version to be available.

He shares the following tidbit:

A special case is PDF files that should be indexed, but not cached. There is no way to directly include meta information in a PDF file, but if security is enabled for a PDF file it will be treated as if the noarchive tag was specified. Security settings can be controlled using Adobe Acrobat (not the free Reader).

So it appears to be possible. More information can be found in this article on controlling how Google caches your pages.
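
The tip above relies on Adobe Acrobat's interface, but the same kind of owner-password security can also be applied with a script. Here's a minimal sketch using the pikepdf Python library (my example, not from the forum post; the file names and password are placeholders, and whether Google treats a script-secured file identically is exactly the question asked below):

```python
# Sketch: enable PDF security (an owner password plus a copy/extract
# restriction) so the file stays readable but carries the "secured" flag.
# File names and the owner password are placeholder values.
import pikepdf

with pikepdf.open("whitepaper.pdf") as pdf:
    pdf.save(
        "whitepaper-secured.pdf",
        encryption=pikepdf.Encryption(
            owner="owner-password",  # needed to change permissions later
            user="",                 # empty: anyone can still open the file
            allow=pikepdf.Permissions(extract=False),  # disallow text extraction
        ),
    )
```

With an empty user password, the PDF opens normally in any reader, which matches the forum member's goal: the file remains crawlable and indexable, but it is flagged as secured, which per the tidbit above Google treats like noarchive.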

Has anyone had success with this method?

Forum discussion continues at DigitalPoint Forums.

This post was written on September 11th and scheduled for publication on September 14th.


Comments:

JohnMu

09/14/2007 02:00 pm

Hi Tamar, you might want to take a look at http://sebastians-pamphlets.com/handling-googles-neat-x-robots-tag-sending-rep-header-tags-with-php/ where Sebastian shows how to leverage the x-robots HTTP header tag to apply "noarchive" to PDF files.
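
The gist of that approach, sketched here in Python rather than the PHP from Sebastian's article (the file name and port are just placeholders):

```python
# Sketch: serve a PDF with an "X-Robots-Tag: noarchive" response header,
# so Google can index the file but won't cache it. The file name
# "whitepaper.pdf" and port 8000 are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoArchivePDFHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with open("whitepaper.pdf", "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/pdf")
        # Equivalent to a noarchive robots meta tag, but usable for
        # non-HTML files like PDFs, which can't embed meta tags.
        self.send_header("X-Robots-Tag", "noarchive")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), NoArchivePDFHandler).serve_forever()
```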

Michael Martinez

09/14/2007 05:32 pm

Using the security setting in .PDF files does indeed appear to prevent Google from caching them. None of the SEO Theory white papers listed in Google's search results have a "Cached" option. They are all secure .PDF files. http://www.google.com/search?hl=en&q=site%3Aseo-theory.com%2Fpapers%2F
