Ads in a Quality Score World
Moderated by Danny Sullivan. “It used to be so easy with the ads…pay the most money and you get the most clicks. Now it’s like there are black boxes everywhere, controlling things. How do you succeed? We will be defining success.”
Josh Stylman from Reprise Media. How is the industry defining “quality score”? The method was originally defined by Google. He shows some historical context explaining how PPC started with GoTo, which began with the simple rule that “whoever pays most is #1.” There were analogies to the financial services market, since you knew what your competitor was paying. “Thank Google” for introducing the idea of CPC × CTR, which made advertisers more aware of the copy they produced and also forced bid management. Why did Google change the auction? Control over the #1 position, fewer less-relevant ads, and of course maximizing Google’s revenue. In 2005, very quietly, Google launched quality score, “the introduction of the black box.” It added other variables, some well known, some that Google is less forthcoming about. “We will not tell you exactly” what this model is (in typical Google fashion). They found that CTR did not equal relevance…it made for less relevant landing pages, because people were maximizing for CTR and bringing people to more links. Arbitrage will be covered in another session. Again, he feels the engines want more control over the market and thus more profit.
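The CPC × CTR auction described above can be sketched in a few lines. This is a minimal illustration of bid × CTR ranking, not Google's actual formula; the ads, bids, and CTR figures below are all hypothetical.

```python
# Hypothetical sketch of a bid * CTR auction: the highest bid no
# longer automatically wins the top spot, a strong ad with a lower
# bid can outrank a weak ad with a deep-pocketed bidder.

def rank_ads(ads):
    """Order ads by max CPC bid * CTR, highest effective rank first."""
    return sorted(ads, key=lambda ad: ad["bid"] * ad["ctr"], reverse=True)

ads = [
    {"name": "A", "bid": 2.00, "ctr": 0.010},  # big bid, weak ad copy
    {"name": "B", "bid": 0.75, "ctr": 0.040},  # smaller bid, strong ad copy
    {"name": "C", "bid": 1.00, "ctr": 0.015},
]

for position, ad in enumerate(rank_ads(ads), start=1):
    print(position, ad["name"], round(ad["bid"] * ad["ctr"], 4))
```

Here ad B (0.75 × 0.04 = 0.03) beats ad A (2.00 × 0.01 = 0.02) despite bidding far less, which is exactly why this change made advertisers pay attention to their copy.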
He shared an example of a search for “Google” at Google in 2005, which showed a bunch of ads on the side that had “snuck in.” In 2006, these ads no longer appear, which means the system has learned to filter out irrelevant ads. He goes over a case study for “Feedcast,” and then a “credit monitoring” client, showing how quality score had an impact in as little as one day: the position cost $11 at the beginning of the day just to gain entry, but once quality score had taken effect, it cost only $4. There are unintended consequences as well, such as artificial CPC inflation. Also, the engines define what “quality” is, which makes it difficult to identify from an agency perspective. Changes to a campaign can also affect quality score; every time you change a variable, your quality score gets reset, which can make advertisers worried about making changes. All this leads to “putting the M back in SEM.” People had grown used to the bid management process running things, but now advertisers must focus on end-to-end management.
Andrew Goodman from Page Zero Media. He introduces this as a very important topic and will talk about the three generations of paid search ad ranking. First was the GoTo/Overture model, pure bid-for-placement. A variation was AdWords 1.0, CPM-based and fixed. Variation 2 was Overture folding in a “click index.” Then came AdWords 2.0, introducing Max Bid + CTR, which led to various CTR “cutoffs” and other factors. Then came what he calls “AdWords 2.5,” and then where we are now, AdWords 2.6.
It is normal to have high quality scores across the board; he showed an example where only 9 of 428 keywords were deactivated…and in fact they were deactivated because of low bids, not quality score. Google just told him there are actually two quality scores: one affects minimum bid, and one affects ad rank. In the old days there were rules, which have since transferred into being scores. Keyword status is based on either predictive or historical CTR. “Other relevancy factors” come into play, including the tightness of the relationship between keyword, ad, and landing page. Landing page and site quality are now also factors. Some types of keywords can lead to a “shorter leash,” basically. He shows a list of keywords that have a very high minimum bid because of the low quality of the page; this was an example where he had lumped all keywords together and used only the home page as a landing page. The problem is that when he goes to fix it, he isn’t 100% clear whether it worked or whether Google even noticed. He showed how he changed the landing page for 1-800-got-junk, and how it took some time to see what happened. He suggests backing up changes with some sort of contact with Google to try to force the change.
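The “two quality scores” idea above can be sketched as two separate checks: one score sets the minimum bid a keyword must clear to stay active, while a separate score feeds ad rank. The inverse relationship and all numbers below are invented for illustration; Google does not publish how minimum bids are derived.

```python
# Hypothetical sketch: a low landing-page quality score inflates the
# minimum bid, deactivating keywords whose bids no longer clear it.

def keyword_status(bid, min_bid_quality):
    """Lower quality -> higher minimum bid; below it, the keyword goes inactive."""
    minimum_bid = 0.05 / min_bid_quality  # assumed inverse relationship
    if bid >= minimum_bid:
        return "active"
    return "inactive (bid below minimum)"

print(keyword_status(bid=0.10, min_bid_quality=1.0))  # healthy page quality
print(keyword_status(bid=0.10, min_bid_quality=0.1))  # poor page quality
```

This mirrors the case study: the same $0.10 bid stays active on a well-matched landing page but goes inactive when page quality drives the minimum bid up.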
How does it work? See Google’s guidelines; they are valuable. Human coders are used to “train the algorithm.” We do not know the exact variables in the algorithm. The principles behind ad quality ratings are derived from user feedback on a large scale, which is then scaled down into the algorithm. Arbitrage does make up a large portion of the user complaints they receive. The AdsBot crawls landing pages looking for “markers” that indicate low-quality pages. CTRs are still key…they still determine where you show up in the auction. It is really unclear how the other relevance factors weigh into the algorithm. He showed an example of a search for “jelly beans,” and how a Yahoo ad may have been affected by low CTR. CTR is normalized for ad position, so if you are in second place, they take into account that you were not in first place, for example.
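The position normalization mentioned above can be sketched simply: divide the observed CTR by an average click-through multiplier for that position, so a lower-slot ad isn't penalized for getting fewer clicks. The multipliers here are made-up illustrative values; Google does not publish its actual normalization curve.

```python
# Hypothetical sketch of position-normalized CTR. The factors below
# (how many clicks each slot gets relative to position 1) are invented.
POSITION_FACTOR = {1: 1.00, 2: 0.60, 3: 0.40}

def normalized_ctr(observed_ctr, position):
    """Estimate what an ad's CTR would have been in position 1."""
    return observed_ctr / POSITION_FACTOR[position]

# An ad in position 2 with a 3% observed CTR is treated like a
# position-1 ad at 5%, under these assumed factors.
print(normalized_ctr(0.03, 2))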
Is Google really targeting arbitrage or not? He doesn’t want to get into it, but shows that some of Google’s organic results aren’t much better than arbitrage landing pages, using “jelly beans” as an example again. He shows a couple of case studies: in one, AdsBot thought the landing page was a “spammy” site when it actually wasn’t. In another, a client upped his bid to $100 and then added a popup telling people, “next time you visit this site, please don’t use a search engine, because each click costs us money” (laughs). He finishes with “arbitrage versus garbitrage”…why doesn’t Google target some sites that seem worse than others? He also offers a quote: even though Google operates paid and organic separately, do not think they are siloed.
Jonathan Mendez will talk about delivering contextual relevance. He goes through an example of a neat audience-segmenting tactic used by Starbucks in the mobile environment. We are lucky as SEMs because nothing is better at delivering contextual content than search. We are able to segment, target, and control the engagement through the landing page experience. The holy grail is the right message to the right audience. So what is quality score? Ads need to be relevant to the keyword as well as the landing page. The ad is the “bridge to relevance”; it is important in both the pre-click and post-click stages. If you are lucky enough for the searcher to click on the ad, you have to meet or exceed their expectations once they see the page. This is a holistic view of providing relevance.
How do you set up campaigns? Use root and stem relevance. He showed an example grouping with “retirement” as the root and other factors as the stems, such as “savings,” “planning,” etc. He showed an example of a multivariate test with some keywords that will be covered further in a later session. He says the best measure of relevance is the conversion rate for ads, and showed how a simple change in an ad title caused conversion to jump from around 5% on average to as high as 11%. They found that the description line had a much greater effect on conversion than the title. He evangelizes that using the right message will ensure relevance. Discovering relevance begins with understanding your audience. You must understand their goals: primary, secondary, and latent. The primary goal could be “find a jacket,” the secondary could be “find a down jacket,” and the latent goals are harder to find. So maybe the mention of an extra pocket or the goose down could be what turns them into clients.
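The root-and-stem grouping above amounts to clustering keyword phrases into ad groups by a shared root term. A minimal sketch, with a hypothetical keyword list (the session's actual campaign structure was not shown in detail):

```python
# Sketch of "root and stem" ad group structure: each keyword phrase
# is filed under the first root term it contains, so ads and landing
# pages can be tailored per root. Keywords below are invented examples.
from collections import defaultdict

def group_by_root(keywords, roots):
    """Map each keyword phrase to the first root term found in it."""
    groups = defaultdict(list)
    for phrase in keywords:
        for root in roots:
            if root in phrase:
                groups[root].append(phrase)
                break
    return dict(groups)

keywords = ["retirement savings", "retirement planning", "401k rollover"]
print(group_by_root(keywords, roots=["retirement", "401k"]))
```

Grouping this way keeps each ad group tight around one root, which is exactly the keyword-to-ad-to-landing-page relevance the quality score discussion keeps coming back to.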
You have to understand what the customer needs…you have to segment in order to drill down. Again, search is wonderful for this because you have their keyword phrase, so you can sometimes find the latent goal. He shows a couple of examples, including a search for “Pope Benedict,” and how one ad is bad and one is relevant. Geo-targeting is key to ad effectiveness, helping people find more “geo-relevant” options. It is important to carry relevance from the ad through to the landing page. If you can even embed some relevance into a form, for example pre-filling the geo area, the visitor will feel better about the experience. He talked about keyword source reinforcement, where using the keyword more prominently on the landing page lifted the conversion rate. You have to understand the way your users think about content. He did some user observation of searchers for “telescopes,” and how the user navigated a site to find the most relevant page. He went through some other good examples of creating a relevant experience. He knocks the NY Times for “clickjunk” and some other issues with helping people understand the relevance of the page they are on.
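The keyword reinforcement and geo pre-fill tactics above can be sketched as a tiny templating step: echo the visitor's search phrase in the landing page headline and pre-fill a location field. The template, field name, and example values are all hypothetical, not from the session.

```python
# Sketch of "keyword source reinforcement": surface the visitor's own
# search phrase on the landing page, and pre-fill a geo form field,
# so the page visibly matches what they searched for.
import html

def landing_page_snippet(search_phrase, city=None):
    """Build a headline (and optional pre-filled location input) from the query."""
    snippet = f"<h1>{html.escape(search_phrase.title())}</h1>"
    if city:  # pre-filling geo data makes the form feel already relevant
        snippet += f'<input name="location" value="{html.escape(city)}">'
    return snippet

print(landing_page_snippet("down jackets", city="Chicago"))
```

Escaping the phrase matters here: the query is user-supplied text, so echoing it into HTML unescaped would be an injection risk.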
Brian Boland from MSN will be a Q&A speaker and gives a nice introduction to how they treat relevance as very important.
QA follows, but we do not cover that so you have to make it out to SES!
Note: sorry I did not cover Video Search Optimization this morning; I was still settling in.