More On Latent Semantic Indexing

Feb 4, 2005 • 8:57 am | comments (2) | Filed Under Search Technology
 

Yesterday I wrote an entry named Latent Semantic Analysis (LSA) - Crawl into the Google Algorithm?, where I discussed how the current theories behind the Google SERP changes point to an algorithm shift at Google. Many now believe this has a lot to do with Latent Semantic Indexing. So, as an SEO, if you haven't already, it's time to read up on the papers on this topic. I, Brian posted a new thread with resources and papers on the topic; he thanks SEW moderator Marcia for her help gathering them. I'll list links to those papers below. Then Ammon Johns posts a quote from one source that does a great job of summarizing the topic. In addition, he links to a thread on this subject started in 2002 at Cre8asite Forums named The Semantic Web.

Here is the snippet Ammon quoted in the SEW thread:

Regular keyword searches approach a document collection with a kind of accountant mentality: a document contains a given word or it doesn't, with no middle ground. We create a result set by looking through each document in turn for certain keywords and phrases, tossing aside any documents that don't contain them, and ordering the rest based on some ranking system. Each document stands alone in judgement before the search algorithm - there is no interdependence of any kind between documents, which are evaluated solely on their contents.
Latent semantic indexing adds an important step to the document indexing process. In addition to recording which keywords a document contains, the method examines the document collection as a whole, to see which other documents contain some of those same words. LSI considers documents that have many words in common to be semantically close, and ones with few words in common to be semantically distant. This simple method correlates surprisingly well with how a human being, looking at content, might classify a document collection. Although the LSI algorithm doesn't understand anything about what the words mean, the patterns it notices can make it seem astonishingly intelligent.
When you search an LSI-indexed database, the search engine looks at similarity values it has calculated for every content word, and returns the documents that it thinks best fit the query. Because two documents may be semantically very close even if they do not share a particular keyword, LSI does not require an exact match to return useful results. Where a plain keyword search will fail if there is no exact match, LSI will often return relevant documents that don't contain the keyword at all.
[ Source: http://javelina.cet.middlebury.edu/lsa/out/lsa_definition.htm]
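The quoted passage can be made concrete with a small sketch. The corpus, query, and rank below are invented for illustration, and this is only a minimal NumPy demonstration of the truncated-SVD step behind LSI, not how Google or any commercial search engine implements it:

```python
import numpy as np

# Toy corpus (hypothetical documents, chosen only to illustrate the idea).
docs = [
    "car engine repair shop",
    "engine repair manual",
    "car motor repair",
    "fresh fruit salad recipe",
    "fruit salad with fresh apples",
]

# Term-document count matrix A (rows = vocabulary terms, columns = docs).
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# LSI core step: a truncated SVD projects terms and documents into a
# low-rank "concept" space that captures word co-occurrence patterns.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                      # keep only the top-k concepts
doc_vecs = Vt[:k].T        # each row: one document in concept space

def fold_in(text):
    """Map a query into the same concept space: q_k = S_k^-1 U_k^T q."""
    q = np.array([text.split().count(w) for w in vocab], dtype=float)
    return (U[:, :k] / s[:k]).T @ q

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

query = "engine"
scores = [cosine(fold_in(query), d) for d in doc_vecs]

# A plain keyword search misses doc 2 ("car motor repair") entirely;
# LSI still ranks it above the unrelated fruit documents.
keyword_hits = [int(query in d.split()) for d in docs]
```

Note what happens to "car motor repair": it contains no occurrence of the query term "engine", so an exact keyword match returns nothing for it, yet in the concept space it sits close to the two documents that do contain "engine", because they share the words "car" and "repair". That is the behavior the quoted passage describes.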

Here is a listing of papers on the LSA topic from the thread:

Added: Check out SEO Book's LSI post; it is very detailed and easy to read. Good work.


Comments:

orion

02/05/2005 12:32 am

Old article, with just talk and a few old graphs. Still, show me the calculations and how a commercial search engine implements LSI. The article does not explain this. Please, let us not be blind followers. Orion

Festplatte

04/14/2005 09:12 pm

In any case, pages that do not contain the keyword will be the second choice to present to the search engine user. And the results that do not contain the keyword may vary too much.
