I have always had a thing for spiders. Not the creepy-crawly kind, but the ones made of bits and bytes that scour the web for new documents to download and index. They are largely predictable, yet they still surprise you when you least expect it. How on earth did it find that page? Many a webmaster has scratched their head in disbelief at a crawler at one time or another. There is a thread on WebmasterWorld asking fresh questions about how search engine crawling technology works, and about the bare-bones infrastructure a search engine uses to go from finding a page to ultimately deciding to list it in its search results. It covers the nuts and bolts of the technology while also updating previously known information with new questions and answers.
So how do search engine robots work, and what are their components?
Spider: a robotic, browser-like program that downloads web pages.
Crawler: a wandering spider that automatically follows links found on pages.
Indexer: a blender-like program that dissects the web pages downloaded by spiders.
The Database: a warehouse of the pages that have been downloaded and processed.
Search Engine Results Engine: digs search results out of the database.
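To make the division of labor concrete, here is a minimal, purely illustrative sketch of that pipeline in Python. It is not how any real search engine is built: the "web" is just an in-memory dictionary of hypothetical pages, the "database" is a plain inverted index, and the URLs and page contents are made up for the example.

```python
import re

# A tiny in-memory "web": URL -> HTML. These pages are hypothetical,
# standing in for documents a real spider would fetch over HTTP.
WEB = {
    "http://example.com/": '<a href="http://example.com/about">About</a> welcome to the example site',
    "http://example.com/about": "the about page describing the example site",
}

def spider(url):
    """Spider: download a page (here, just a dictionary lookup)."""
    return WEB.get(url, "")

def extract_links(html):
    """Crawler: find links on a downloaded page to follow next."""
    return re.findall(r'href="([^"]+)"', html)

def indexer(url, html, index):
    """Indexer: dissect the page into words and file them in the database."""
    text = re.sub(r"<[^>]+>", " ", html)  # strip HTML tags
    for word in re.findall(r"[a-z]+", text.lower()):
        index.setdefault(word, set()).add(url)

def crawl(seed):
    """Drive the spider from a seed URL, building the inverted index."""
    index, frontier, seen = {}, [seed], set()
    while frontier:
        url = frontier.pop()
        if url in seen:
            continue
        seen.add(url)
        html = spider(url)
        indexer(url, html, index)
        frontier.extend(extract_links(html))
    return index

def search(index, term):
    """Results engine: dig matching pages out of the database."""
    return sorted(index.get(term.lower(), set()))

index = crawl("http://example.com/")
print(search(index, "example"))
```

Running it crawls both toy pages and returns every URL containing the query word. Real systems add politeness delays, robots.txt handling, ranking, and vastly more sophisticated storage, but the spider / crawler / indexer / database / results-engine split is the same.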
Pageoneresults takes it a step further in that thread, posing new questions about search engine robots for those not already familiar with them.