How Do Search Engine Robots Work?

Jan 10, 2007 • 9:02 am | comments (1) | Filed Under Search Technology
 

I have always had a thing for spiders. Not the creepy crawly kind, but the ones made of bits and bytes that scour the web for new documents to download and index. They are so predictable, yet they still manage to surprise you when you least expect it. How the hell did they do that, or find that page? Many a webmaster has scratched their head in disbelief at a crawler at one time or another. There is a thread on WebmasterWorld asking new questions about how search engine crawling technology works and the bare-bones infrastructure behind it: how a search engine goes from finding a page to ultimately deciding to list it in its search results. It covers the nuts and bolts of the technology and also updates previously known information with new questions and answers.

So how do search engine robots work, and what parts make them up?

Spider: a robotic, browser-like program that downloads web pages.
Crawler: a wandering spider that automatically follows links found on pages.
Indexer: a blender-like program that dissects web pages downloaded by spiders.
The Database: a warehouse of the pages downloaded and processed.
Search Engine Results Engine: digs search results out of the database.
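To make that division of labor concrete, here is a rough sketch of how those pieces might fit together, using only Python's standard library. It is an illustration of the concepts above, not any real search engine's code; the class and function names are made up for the example.

```python
# Hypothetical sketch of the spider/crawler/indexer/database pipeline described
# above. All names here are illustrative, not a real search engine's internals.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Crawler part: collect href values from anchor tags in a downloaded page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=10):
    """Spider + crawler: download pages breadth-first and follow the links found."""
    queue, seen, database = deque([seed_url]), {seed_url}, {}
    while queue and len(database) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue  # skip pages that cannot be fetched
        # Indexer + database: a real engine would dissect the page into an
        # inverted index of terms; here we simply store the raw HTML per URL.
        database[url] = html
        extractor = LinkExtractor()
        extractor.feed(html)
        for link in extractor.links:
            absolute = urljoin(url, link)
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return database
```

The results engine then answers queries by looking things up in that stored database rather than re-crawling the web, which is why a page has to be crawled and indexed before it can ever show up in the search results.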

Pageoneresults takes it a step further, creating this thread to ask new questions about search engine robots for those who are not already familiar with them.

1. Do robots accept cookies?
2. What happens if my site forces a cookie?
3. Do robots execute JavaScript functions?
4. Could I be doing something technically that is stopping a robot from indexing my site?
5. How do robots interpret my page?
6. In what order do robots index my page? What is the very first step a robot takes?
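On the first three questions, it helps to see how bare a typical robot request is. The sketch below is purely hypothetical ("ExampleBot" is a made-up user-agent) and fetches a page the way a simple robot would: it sends no cookies and never runs JavaScript, so anything a site only reveals after setting a cookie or executing a script may simply never reach the indexer.

```python
# Hypothetical illustration of questions 1-3: a bare robot request carries no
# cookie jar and never executes JavaScript, so it sees only the raw HTML.
from urllib.request import Request, urlopen

def fetch_as_robot(url):
    # Identify as a made-up crawler via the User-Agent header. No cookies are
    # stored or sent, so a site that forces a cookie before serving content may
    # hand the robot a redirect or an empty page instead of the real thing.
    request = Request(url, headers={"User-Agent": "ExampleBot/1.0"})
    html = urlopen(request, timeout=10).read().decode("utf-8", "replace")
    # Anything injected client-side (document.write, AJAX, etc.) is absent from
    # this string; the robot can only interpret and index what arrives here.
    return html
```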
Continued discussion on WebmasterWorld - How Do Robots Work?


Comments:

seo company

02/09/2011 05:55 pm

Search engine robots, sometimes called "spiders" or "crawlers", are the seekers of web pages. Robots collect links from each page they visit. When they arrive at your website, the automated robots first check to see if you have a robots.txt file. This file is used to tell robots which areas of your site are off-limits to them.
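That robots.txt check is easy to demonstrate with Python's standard library; the sketch below is purely illustrative, and "ExampleBot" is again a placeholder user-agent rather than a real crawler.

```python
# Sketch of the first thing a well-behaved robot does on arrival: fetch
# robots.txt and ask it whether a given URL is off-limits.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()  # downloads and parses the site's robots.txt

url = "https://www.example.com/private/page.html"
if parser.can_fetch("ExampleBot", url):
    print("allowed to crawl", url)
else:
    print("disallowed by robots.txt", url)
```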
