Preventing Google from Caching PDF Files

Sep 14, 2007 • 9:15 am | Filed Under Google Search Engine

In July, I wrote asking for ideas on how to prevent "View as HTML" links from appearing next to PDF files in Google's search results. In other words, authors of PDF files don't want them to be cached.

A DigitalPoint Forums member seems to have found a way to do this without using robots.txt. After all, he wants his files to be crawled and indexed; he just doesn't want an HTML version to be available.

He shares the following tidbit:

A special case is PDF files that should be indexed, but not cached. There is no way to directly include meta information in a PDF file, but if security is enabled for a PDF file it will be treated as if the noarchive tag was specified. Security settings can be controlled using Adobe Acrobat (not the free Reader).
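If you want to confirm that a PDF actually has its security settings enabled, here is a minimal sketch (my own illustration, not from the forum post): a PDF with security enabled carries an /Encrypt entry in its trailer, so scanning the raw bytes is a cheap sanity check. Note this is a heuristic; in rare cases the string could appear elsewhere in the file.

```python
def has_security_enabled(pdf_bytes: bytes) -> bool:
    """Rough heuristic: a secured PDF's trailer references an /Encrypt dictionary."""
    return b"/Encrypt" in pdf_bytes

# Usage with a real file:
# with open("whitepaper.pdf", "rb") as f:
#     print(has_security_enabled(f.read()))
```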

So it appears to be possible. More information can be found in this article on controlling how search engines cache your pages.

Has anyone had success with this method?

Forum discussion continues at DigitalPoint Forums.

This post was written on September 11th and scheduled for publication on September 14th.




09/14/2007 02:00 pm

Hi Tamar, you might want to take a look at Sebastian's post, where he shows how to leverage the X-Robots-Tag HTTP header to apply "noarchive" to PDF files.
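The header approach the commenter describes works because a PDF cannot carry a robots meta tag, but the server can send the equivalent directive in the HTTP response. A minimal sketch of the idea (my own illustration; the function name and use of Python are assumptions, and in Apache the same effect comes from mod_headers with `Header set X-Robots-Tag "noarchive"` inside a `<FilesMatch "\.pdf$">` block):

```python
def extra_headers(path: str) -> dict:
    """Extra HTTP headers to send when serving the file at `path`."""
    if path.lower().endswith(".pdf"):
        # noarchive: the file may be indexed, but no cached copy is shown
        return {"X-Robots-Tag": "noarchive"}
    return {}

print(extra_headers("/whitepapers/seo-theory.pdf"))
# {'X-Robots-Tag': 'noarchive'}
```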

Michael Martinez

09/14/2007 05:32 pm

Using the security setting in .PDF files does indeed appear to prevent Google from caching them. None of the SEO Theory white papers listed in Google's search results have a "Cached" option. They are all secure .PDF files.
