A couple of hours ago, Google's Andrey Lipattsev, a Search Quality Senior Strategist, participated in a Q&A with Bill Slawski, Ammon Johns, Eric Enge and Anton Shulke.
Andrey Lipattsev gave one of the most detailed explanations yet of why it is so hard for Google to confirm or deny algorithmic updates, including why there was so much confusion around what it means for an algorithm to be part of the core algorithm. Let me summarize the long explanations and then share transcripts (I type fast) and the video:
What Does Core Mean?
In short, it means that an algorithm that becomes part of the core algorithm is now solid and consistent enough to join the family of algorithms Google knows works. Google will not update it that often anymore; they trust that it just works. It will continue to run, just like PageRank runs.
In fact, he said he doesn't believe there is even a list of which algorithms are part of the core and which are not.
Why It's Hard To Confirm Some Updates
Google is constantly making changes: some are experiments, some are samples, and some are larger updates. Within some algorithms, they might make tweaks daily, weekly or monthly. Other algorithms they may launch once and never update again. We've seen an example of an algorithm that got tweaked and then was not updated for a while with Penguin: it was updated a lot after Penguin 3.0 launched and then lay dormant for more than 15 months.
In addition, some algorithms use continuous crawl data, some crawl their own data periodically, and some have their own fetch systems. These cycles can follow a pattern, happen sporadically or never recur; it depends on each element.
Finally, if Google makes a tweak to one line of code, is that an update? Or does it take 50, 500 or 5,000 lines of code? When is a tweak big enough to call it an update?
Should We Care About Updates?
Andrey went back to the question of SEOs obsessing over updates. His advice: ultimately, build the best site you can and Google should do the rest. But he understands this is not always realistic.
Here is the video embed, which starts at about 8 minutes in:
Here are some transcripts that I took while listening (I hope they are 100% accurate):
So with core, “it is less about the functionality, which means it probably doesn’t change that much over time, and it is more about how we perceive it, in the context of the algorithm. Do we still think this is an experimental thing, it is running for a while and we aren’t sure how long it will last? Or is it like PageRank, it is part of it, it will always be there, at least in the foreseeable future, and then we probably call it, in certain contexts, part of the core algorithm.”
On one hand we have the big launches: an engineer sits down, writes up a whole bunch of code, runs some tests, sees how it affects a sample of results, how they compare to each other, are they better, are they worse, gets some stats back on that. Then they go to the launch committee and say, look, if we do this thing our results may be so much better... The launch committee says okay, let’s launch this.
Then there might be something like Penguin was, like Panda was: they sit down and write this huge chunk of code, and it was going to make a huge difference. That is a big change, that is a big one.
That code is not necessarily static after that; it might not change for years, no one might touch it for a really long time. Or, on the contrary, it might be something that our engineers submit changes for every week, tweaking little things. How do you treat that? If someone goes in and tweaks one line of code, among tons of lines of code, is that an update, is that a launch? What is that? It does go through approvals, where people look at others' changes; there is a quality check procedure in place. How do we qualify it? If someone tweaks that little thing, is that Penguin updated every day, or do we wait until five lines of code are changed, or fifty lines of code, or 500?
So that makes it more difficult to talk about, because small incremental changes like that may or may not happen in different elements of the code, continuously or separately, known to you or not. As you can imagine, any machine system would be exactly the same.
And then the third part of it is the data collected by these algorithms and processed by them: primarily the URLs, web sites and maybe the additional meta information around them. Our primary way of acquiring information about the web is crawling... There are systems that use that, there are systems that may go out themselves, there are systems that will go talk to the results of what we fetch continuously, and there are those that would do it less regularly. And the same system may do it at different intervals, depending on different conditions.
Then at the 13:16 mark he goes through an example of this with Penguin.
Thus talking about monthly updates, weekly updates or daily updates becomes less meaningful, because you are not really sure what you are talking about, and at the end of the day what you care about the most is whether the particular site is affected by the algorithm you were worried about.
Ammon Johns then said, "once they forget how it works, it is core."
Andrey Lipattsev replied, "that is exactly right."
Forum discussion at YouTube