A Search Engine Watch Forums member asks about the large discrepancies among results from three keyword research tools: Keyword Discovery, WordTracker, and Overture. He is concerned that these tools may not provide accurate information.
One of the members provides some excellent feedback. The discrepancies stem from the source of the information, the time period over which it was gathered, and the different ways each tool processes keywords. He suggests using all of the information together and averaging it to resolve inconsistencies.
The main challenge is that all three, WT - OV - KD, are very different beasts and so there can be no direct correlation between them.
For example: 1) all three draw their data from very different sources; 2) their data is drawn over different time periods, OV one month, WT three and KD twelve months, so the count figures would need some rationalisation; 3) all three have different data cleansing systems, e.g., OV de-pluralises, de-punctuates and sometimes alphabetises the words in the search phrase, which again muddies the correlation waters;
and so on.
So you have to figure out what parameters and algorithms you are going to apply when combining the data.
I would suggest the first step would be to generate monthly equivalents (WT/3 & KD/12) and then perhaps do some averaging. Even then, the eyeball is perhaps the best filter.
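The normalisation step the member describes can be sketched in a few lines. This is only an illustration of the arithmetic, not an endorsed formula; the counts and the simple unweighted average are assumptions for the example.

```python
def monthly_average(ov_count, wt_count, kd_count):
    """Combine counts from the three tools into one monthly figure.

    OV reports roughly one month of data, WT roughly three months,
    and KD roughly twelve, so WT and KD are scaled down to monthly
    equivalents before a plain average is taken.
    """
    wt_monthly = wt_count / 3   # WordTracker: ~3 months of data
    kd_monthly = kd_count / 12  # Keyword Discovery: ~12 months of data
    return (ov_count + wt_monthly + kd_monthly) / 3

# Hypothetical counts for a single phrase from each tool:
print(monthly_average(ov_count=300, wt_count=900, kd_count=3600))  # → 300.0
```

Even with the counts rationalised like this, a final eyeball check is still worthwhile, since each tool's sampling and cleansing quirks can skew individual phrases.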
Kinda like comparing Oranges, Lemons and Grapefruit in that they are all from the Citrus family but are very different and individual fruits. Combine their juices though and you have a rather interesting Citrus tasting drink with some attributes of all three.
Another observation came from a keen member who acknowledged that the discrepancies may be intentional, but noted that other factors are at play as well. His conclusion: all of the tools should be used together.
I'd add that WT draws from a very small sample of web searches, so its data isn't very useful for infrequently searched phrases. I've seen WT skewed dramatically in certain keyword areas, probably by search marketers in those areas who wanted to muddy the waters.
OV data, on the other hand, suffers from lots of automatic rank checking, which tends to happen on more competitive and frequently searched terms, so it's heavily skewed at the top end. I've also observed that OV has recently forced stemming on certain searches, which makes it useless for assessing many phrases.
The Google External AdWords tool, another tool you should put into the mix, is skewed by the algorithms Google applies for AdWords buyers. On the other hand, it has the largest sample size, is the least likely to be skewed by test searches, and of course draws on demographics that come from Google.
This is some very good information. You can read more at Search Engine Watch.