Do Search Engine Spiders Pick Up URLs in a JavaScript Menu?

Sep 24, 2004 • 2:22 pm | Filed Under SEO - Search Engine Optimization

This subject has been discussed at length before, but the answers have largely remained personal opinion. A thread over at HighRankings addresses some interesting issues regarding whether or not a search engine spider can extract links from a JavaScript menu. If the JavaScript is external, a spider cannot read the links at all, but what about a menu written entirely in the page's code? Will a <noscript> tag work well as a fallback for a JavaScript menu? I have played around with the <noscript> tag when using a JavaScript menu, and as one member points out, it may not be the most aesthetic solution in the SERPs: what you get is a long list of URLs in the snippet instead of a description.
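To make the setup concrete, here is a minimal sketch of the kind of menu under discussion; the page names are hypothetical. The links live inside a script block, while the <noscript> block repeats them as plain HTML that a spider can read even when it ignores the script:

    <script type="text/javascript">
      // A menu written entirely in the page via document.write().
      document.write('<a href="products.html">Products<\/a> ');
      document.write('<a href="services.html">Services<\/a>');
    </script>
    <noscript>
      <!-- The same links as plain HTML for spiders and
           visitors with JavaScript turned off. -->
      <a href="products.html">Products</a>
      <a href="services.html">Services</a>
    </noscript>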

So what happens if you have 200 URLs in a JavaScript menu? Including all 200 in a <noscript> tag would be really cumbersome. Some of the members discuss solutions to this. I can imagine that if your site is structured correctly, or includes a site map, you might be able to get away with only a handful of URLs in the <noscript> tag, as sketched below.
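For instance, instead of duplicating the full menu, the <noscript> block might carry only the top-level sections plus a link to the site map, letting a spider reach everything else from there. The page names here are hypothetical:

    <noscript>
      <!-- Only the top-level sections, not all 200 menu URLs. -->
      <a href="products.html">Products</a>
      <a href="services.html">Services</a>
      <a href="support.html">Support</a>
      <!-- The site map page links out to everything else. -->
      <a href="sitemap.html">Site Map</a>
    </noscript>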

Additionally, today there is a great thread at DigitalPoint about a new Googlebot lurking around that uses HTTP/1.1 and spiders many levels deep in a single pass. This is interesting because, as one of the members points out, it may be a test by Google of spidering JavaScript URLs. So maybe there is a better solution to a JS menu than a <noscript> tag, and the search engines are finally able to handle it all successfully. Check out and discuss the thread about Googlebot 2.1 and JavaScript.
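How a spider might pull that off is anyone's guess, but even a menu built with document.write() leaves its URLs sitting in the page source as plain text, so a simple pattern match over the markup could find them. This is only my assumption of the idea, not Google's actual method, and queueForCrawl is a made-up helper:

    // Assumption: a crawler scans the raw page source, including
    // <script> blocks, for anything that looks like an href value.
    var pattern = /href=["']?([^"'\s>]+)/gi;
    var match;
    while ((match = pattern.exec(pageHtml)) !== null) {
      // match[1] is a candidate URL, even one buried inside a
      // document.write() call in a JavaScript menu.
      queueForCrawl(match[1]);  // hypothetical helper
    }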
