Google Responds To Impact Of Blocking CSS & JS & Panda 4.0

Jun 24, 2014 • 8:23 am | comments (21) | by Barry Schwartz | Filed Under Google Search Engine Optimization
 

Yesterday we covered some SEO theories around blocking JavaScript & CSS triggering Panda 4.0 issues. I honestly didn't believe there was a relation, based on the example provided and the fact that very few other sites reported the same effects, but now we have a response from a Googler.

Well, maybe the response is a bit Google-like and cloudy.

One webmaster posted the theory on Google Webmaster Help, and John Mueller responded to the specific case at hand, not necessarily to Panda 4.0 and how it relates to blocking CSS & JavaScript. But he did respond to the question about being hit by Panda while blocking content via external files.

John Mueller of Google wrote:

Allowing crawling of JavaScript and CSS makes it a lot easier for us to recognize your site's content and to give your site the credit that it deserves for that content. For example, if you're pulling in content via AJAX/JSON feeds, that would be invisible to us if you disallowed crawling of your JavaScript. Similarly, if you're using CSS to handle a responsive design that works fantastically on smartphones, we wouldn't be able to recognize that if the CSS were disallowed from crawling. This is why we make the recommendation to allow crawling of URLs that significantly affect the layout or content of a page. I'm not sure which JavaScript snippet you're referring to, but it sounds like it's not the kind that would be visible at all. If you're seeing issues, they would be unrelated to that piece of JavaScript being blocked from crawling.
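To make the robots.txt side of this concrete, the pattern John is warning about looks something like the hypothetical sketch below (the /js/ and /css/ paths are placeholders, not taken from the thread). The first file keeps Googlebot from fetching the very resources it needs to render the page; the second lets it through:

    # What many sites still do, hiding rendering resources from Googlebot:
    User-agent: *
    Disallow: /js/
    Disallow: /css/

    # The alternative John describes: drop those rules, or explicitly allow them.
    User-agent: *
    Allow: /js/
    Allow: /css/

Either way the HTML itself stays crawlable; what changes is whether Google can see the page the way a browser does.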

So is John saying that if you block content, it may impact the Panda algorithm? Or is he saying that Google can't see the blocked content anyway, so it has no impact on Panda? Or maybe it may or may not have an impact on Panda, because Panda is about content and maybe layout?

See how this can get confusing? What is your take?

Forum discussion at Google Webmaster Help.

Update: John responded again, basically implying it is not Panda. He wrote:

Looking at your site, those disallowed scripts are definitely not causing a problem -- it's primarily an issue of problematic links here. That's what I'd focus on first. Since there's a manual action involved, that's something which you can work on to resolve.

He then aims to answer the specific question at hand head-on:

Regarding your more general question of whether disallowed scripts, CSS files, etc play a role in our Panda quality algorithm: our quality algorithms primarily try to understand the overall quality of a page (or website), and disallowing crawling of individual aspects is generally seen as more of a technical issue so that wouldn't be a primary factor in our quality algorithms. There's definitely no simple "CSS or JavaScript is disallowed from crawling, therefore the quality algorithms view the site negatively" relationship.

He goes on in more detail, so check out the thread.

Image credit to BigStockPhoto for Panda Java Mug


Comments:

Jasper Oldersom

06/24/2014 12:51 pm

To me it seems like he is saying "if you don't allow us to crawl it, we can't credit you for it". But that would imply that they don't discredit/penalize you if you disallow them. It would just be hard for them to see what's in there. Pretty confusing indeed.

Peter Nikolow

06/24/2014 12:52 pm

There is a running discussion on Joost's site about this: https://yoast.com/google-panda-robots-css-js/ In my case (I posted this as a comment there), once I unblocked crawling I suddenly started receiving 404 errors in the WMT smartphone crawler. I suspect this is due to JavaScript, because those links didn't exist there.

Jaimie Sirovich

06/24/2014 12:54 pm

@rustybrick:disqus My theory is that Chrome & V8 were the lovechild of a platform Google could use to render pages en masse with more efficiency and a solid security model/isolation. Chrome was just a side effect, and Eric Schmidt allegedly never wanted to get into the browser business. So if Google is using Chrome to understand your pages and where content is with more confidence, be careful with limiting its access to that understanding. It only makes sense for them to use a rendering engine and JavaScript more and more when developing new factors, so long as it scales. But nope, no way to prove it. I just think http://ipullrank.com/googlebot-is-chrome/ on some level.

for-they-know-not-what-they-do

06/24/2014 12:59 pm

I think he is saying... "Look mate, I have not got a bloody clue. All these algo changes are bad enough and then you lot go an block me bot from your CSS and Java. So in a nutshell, on a Thursday when the moon is in full cycle and the summer equinox has passed it should not affect you. However, should the state of the moon fall partially between Jupiter rising in Sagittarius you might come unstuck." So there you have it. Simple!

Jaimie Sirovich

06/24/2014 12:59 pm

Not rewarding can be the same as penalizing, effectively. No?

interartsllc

06/24/2014 01:10 pm

Not sure John is the complete authority on this topic at Google, so I wouldn't read too much into this. But he's just saying that if GoogleBot can't crawl your CSS and JavaScript files, there's a chance that not all the content on your site will be found. He also states that this is a "recommendation".

Martin Oxby

06/24/2014 01:19 pm

Not sure it's directly related to Panda, but with Panda being about on-site issues for the most part, it could easily be confused. This seems to be more about ensuring Google gets as much info about your site as possible, so the machines can imitate humans as closely as possible. If this guy fixes the crawling and very quickly gets a boost in rankings, then it's not Panda, because it won't fall into the monthly refresh.

Michael Martinez

06/24/2014 01:30 pm

Again, since Joost's case study documents a drop in traffic that occurred almost 2 weeks before Panda 4 rolled out, THIS IS NOT A PANDA ISSUE.

Barry Schwartz

06/24/2014 01:31 pm

I said that. But still, John’s response is weird.

Jasper Oldersom

06/24/2014 01:36 pm

I agree. Both will drop your rankings. But I think there is a difference between "we don't recommend" and "if you use this practice, we will assume you're trying to game Google," like building a crappy link profile does.

Marie Haynes

06/24/2014 02:05 pm

It sounds to me like John is saying that it's simply good practice to allow Google to be able to see all of your site. Could it contribute to Panda? Sure, I think it's possible. If your site, with JavaScript and CSS blocked, ends up looking like thousands of empty pages, then to Google it will look like your site has a lot of thin content. To me, though, the dates don't line up with Panda in Joost's case. But then again, it's possible that there were Panda refreshes that we didn't know about. Really, though, it seems quite obvious to me that if you had a bunch of content that Google couldn't see, and then you made changes so that Google can now see it, of course you're going to see an increase in search traffic.
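(To picture the "empty pages" scenario Marie describes, consider the minimal hypothetical page below; none of this code is from the site in the thread. Nearly all of its visible text arrives via an AJAX/JSON feed, so a crawler that can't fetch the script sees only an empty shell:

    <!-- Hypothetical page: the raw HTML response contains almost no text. -->
    <body>
      <div id="content"></div>
      <script>
        // If robots.txt disallows this script (or the /api/ path),
        // a crawler honoring the block sees only the empty div above.
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/api/article.json');
        xhr.onload = function () {
          document.getElementById('content').textContent =
            JSON.parse(xhr.responseText).body;
        };
        xhr.send();
      </script>
    </body>

Multiply that by every URL on the site and the whole domain starts to look like thin content.)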

Widget Links Suck

06/24/2014 02:14 pm

I agree with this statement 100%. Refer to the questions Amit Singhal shared after the first version of Panda. It screams of on-page quality, content, and DESIGN. Also, with all these recent changes/improvements to Fetch as Googlebot, all of which are visual, they have to be looking at MORE than just the content on the page. Again, as Jaimie pointed out, if it can scale.

Michael Martinez

06/24/2014 02:26 pm

If there is a Panda connection at all to CSS/JavaScript crawling, it will certainly be helpful for Google to clarify that. Or they could at least say, "Oh, yeah, you can now fix Panda problems and see results within 2 weeks." No one seems to have noticed the significance of that. A 2-week fix for Panda seems incredible, but I have read a couple of other case studies (not about CSS and JavaScript) that claimed 2-week fixes for Panda 4.0. If Google has made its algorithms more responsive to FIXES, I'd think everyone would want to know that.

Alin

06/24/2014 02:52 pm

John posted again about an hour ago on that thread:

"Looking at your site, those disallowed scripts are definitely not causing a problem -- it's primarily an issue of problematic links here. That's what I'd focus on first. Since there's a manual action involved, that's something which you can work on to resolve. Keep in mind that even after resolving the manual action, it can take a bit of time for all of our algorithms to take those changes into account (we have to first recrawl those links, see that they're removed, disavowed, nofollow'ed, take that into account in compiling the data for the algorithms, and then make those changes public -- this can sometimes take a half a year or even longer, depending on how many problematic links are out there, how long they've been there, etc). My recommendation would be to really clean up any link issues as completely as you can, so that you don't have to worry about them again in the future, so that you don't have to go through several rounds of reconsideration requests.

Regarding your more general question of whether disallowed scripts, CSS files, etc play a role in our Panda quality algorithm: our quality algorithms primarily try to understand the overall quality of a page (or website), and disallowing crawling of individual aspects is generally seen as more of a technical issue so that wouldn't be a primary factor in our quality algorithms. There's definitely no simple "CSS or JavaScript is disallowed from crawling, therefore the quality algorithms view the site negatively" relationship.

A lot of sites disallow crawling of JavaScript & CSS for historical reasons that are generally no longer relevant. If your JavaScript or CSS files significantly affect the content or layout of the page, we recommend allowing us to crawl them, so that we can use that additional information to show your site for queries that match content which isn't directly in your HTML responses. While unrobotting that content would make things easier for our algorithms to pick up, it would be incorrect to say that not allowing crawling would automatically trigger our quality algorithms to view your site negatively.

For more information about our quality algorithms (which, for the moment, isn't negatively affecting your site), I'd recommend reviewing http://googlewebmastercentral.blogspot.ch/2011/05/more-guidance-on-building-high-quality.html Hope that helps!"
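(For readers unfamiliar with the disavow step John mentions above: the disavow file submitted through Webmaster Tools is just a plain-text list, one URL or domain per line, with "#" for comments. A minimal example, with placeholder domains:

    # Paid links we could not get removed manually
    http://link-network.example/page-with-paid-link.html
    # Disavow every link from an entire spammy directory site
    domain:spammy-directory.example

As John notes, Google still has to recrawl those links before the disavow takes effect in its algorithms.)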

aunty seo

06/24/2014 03:59 pm

Hi Marie. Your assessment is spot on (about empty pages & thin content above the fold)! I have a web property which looks like this: background color, Home (menu) . . . (many more), Next link (menu) . . . (many more). The story in short: it's been hit with strange algo penalties since Hummingbird every couple of weeks (normal traffic - down 90% - normal traffic - down 80% - traffic up "after Panda cleanup" - down again 90%). Since this idea came to light, I have allowed all blocked CSS & JS and am waiting to see what happens next! I believe this is more of a Hummingbird issue (next-gen algo? not even G can confirm - even John - as they do not know the side effects unless tried and tested). Wake up, boys & girls....

auntryseo

06/24/2014 04:02 pm

Addition/clarification: "I have a web property which looks like this" refers to Fetch and Render in GWT.

Jamo

06/24/2014 04:27 pm

Why is it so hard for Google engineers to answer a question with a straight answer? I know this can be problematic with engineers, as they can't relate to those lower on the intelligence spectrum. My guess is that Google puts all "public"-facing engineers through exhaustive training on how to answer a question properly without giving a real answer at all. Keep them confused!

just guy

06/24/2014 05:49 pm

Because they don't have an answer. Their algorithm is programmed to ban everything except whitelisted sites. Google = corruption.

badgoogle

06/24/2014 05:53 pm

They update information with fantastic speed: once per 6 months. It's 2014; even Bing does it faster. As for manual penalties by Indian "reviewers", there is nothing to say. It's just bullshit. And as for "historical reasons which are no longer relevant": where are the current "relevant reasons", Google??? It's easy to kill businesses by keeping a lot of people blind. But it's corruption!

qqq

06/24/2014 05:57 pm

Make sites for humans, not for Google. Algorithms are unable to see your site as it really is. The more Scroogle's algorithms know about your site, the worse off you are.

Jaimie Sirovich

06/25/2014 04:22 am

It may have nothing to do with Panda, but it's still interesting if it has to do with any part of the algorithm mix.
