Preventing Google from Caching PDF Files

Sep 14, 2007 • 9:15 am | Filed Under Google Search Engine

In July, I wrote asking for ideas on how to prevent "View as HTML" links from appearing on PDF files. In other words, authors of PDF files don't want them to be cached.

A DigitalPoint Forums member seems to have found a way to do this without resorting to robots.txt. After all, he wants his pages to be crawled and indexed; he just doesn't want an HTML version to be available.

He shares the following tidbit:

A special case is PDF files that should be indexed, but not cached. There is no way to directly include meta information in a PDF file, but if security is enabled for a PDF file it will be treated as if the noarchive tag was specified. Security settings can be controlled using Adobe Acrobat (not the free Reader).

So it appears to be possible. More information can be found in this article on controlling how your pages are cached.
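Along the same lines, Google has also announced support for an X-Robots-Tag HTTP header, which lets you apply directives like noarchive to non-HTML files such as PDFs at the server level. A minimal sketch, assuming an Apache server with mod_headers enabled (the file pattern and setup are illustrative, not from the forum thread):

```apache
# Assumes Apache with mod_headers enabled; adjust for your server.
# Sends "noarchive" for every PDF so the file can still be crawled
# and indexed, but no cached copy is offered in search results.
<FilesMatch "\.pdf$">
    Header set X-Robots-Tag "noarchive"
</FilesMatch>
```

Unlike the Acrobat security-settings trick, this doesn't require touching the PDF files themselves.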

Has anyone had success with this method?

Forum discussion continues at DigitalPoint Forums.

This post was written on September 11th and scheduled for publication on September 14th.
