Now that the latest changes to “close variants” of exact match keywords have started rolling out more broadly than the last time I wrote about it, I’ve started to hear anecdotes about some of the impact. At a recent conference I attended, there were at least a few advertisers reporting significant changes in performance due to some close variants that didn’t seem so close after all. While I’m sure there are many advertisers who are benefiting from this change with cheap additional conversions, the fact that the results are mixed makes it worth looking at four ways that Google Ads Scripts can help restore control for advertisers who are impacted.
Last month I shared a script to help you easily see the impact for your own accounts, but this month I’ll take that script one step further. With the latest version (below), you will be able to automatically add negative keywords for close variants that aren’t working well. I’ll also share two variations of similar scripts that automate adding negative keywords, and a script for getting high-level match type performance reports.
But first, to remind everyone of the latest change to close variants and how it fits in with other changes made by Google, here’s a timeline of how close variants have evolved over the years.
A timeline of how match types including “close variants” have evolved over time.
Updated examples of exact match close variants
With the query analysis script I shared last month, it becomes really easy to see specific examples of what Google considers to be “close variants.” But when I first shared the script, many advertisers were still running on the 2017 version of close variants so it’s probably a good idea to look at the latest results from the script now that most advertisers should be on the 2018 version. As Google said in their announcement, the changes are expected to roll out through October of this year. Here is an example of some close variants of exact match that I found on Oct. 19.
These search terms are all relevant but appear to be different enough that they would likely benefit from being managed as separate keywords with unique bids.
Don’t undo all close variants
In the table above with four examples of not-so-close close variants, there were another 2,000 close variants that were much closer. The four I picked out as less relevant had only about 80 clicks, while the others had 28,000 clicks. While individual results may differ, it’s probably fair to say that close variants can drive good traffic that’s worth keeping.
There is a script available that undoes all close variants but I prefer to be a bit more picky and only eliminate close variants that are driving poor performance or that are just too semantically different. There are many close variants that are actually helpful because they let advertisers manage accounts without having to worry about every possible typo or other close variation as extra keywords.
For example, my company’s name is “Optmyzr” and there are a few vowels we had to leave out of our name to be able to get the domain name. As a result, a lot of people spell our name wrong in search and it saves us a lot of time managing those misspellings as keywords by letting close variants pick up the traffic automatically.
I might be dating myself, but I was reminded of the 597 ways people spell Britney Spears incorrectly. There’s clearly a benefit to not having to manage all typos manually.
A script for seeing aggregate performance by match type
A really old script that is all of a sudden very relevant again is the match type performance script. While it doesn’t show query-level detail and hence can’t be used for managing accounts, it helps advertisers see the big picture of how different match types impact their performance, and it informs strategy decisions like how much effort to put into managing different types of keywords.
This script helps advertisers see the big picture by reporting stats aggregated at the keyword match type level. There are two tabs on this report, one for keyword match types and another for search term match types.
Section 1: Keyword match types
When advertisers talk about match type, they most often mean the match type they chose for the keyword, not how those keywords actually matched to queries. For example, when looking at the performance for the phrase match keyword “optmyzr scripts” in Google Ads, we see the data for the “keyword match type.”
The unsegmented view of keywords shows performance based on the advertiser-selected match type, without considering its relationship to the query.
In this example, we see 69 impressions for a phrase match keyword.
Based on this way of counting, the script aggregates the performance for all keywords and returns the totals.
In this example, keywords added as exact match have a far lower CPA than those added as broad matches.
Note that there is no data for broad match modified (BMM) keywords because that’s not an official match type and BMM data gets lumped in with broad match in Google’s reports.
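The core of a report like this is a simple aggregation over a keyword report. The sketch below shows one way it might look in a Google Ads Script; the AWQL report and field names follow the KEYWORDS_PERFORMANCE_REPORT, but treat them as assumptions to verify against the current reporting documentation. The aggregation itself is split into pure helpers so it can be tested with stub data.

```javascript
// Pull last 30 days of keyword stats and total them by match type.
// This function only runs inside the Google Ads Scripts environment.
function buildMatchTypeReport() {
  var report = AdsApp.report(
      'SELECT KeywordMatchType, Clicks, Cost, Conversions ' +
      'FROM KEYWORDS_PERFORMANCE_REPORT DURING LAST_30_DAYS');
  var totals = {};
  var rows = report.rows();
  while (rows.hasNext()) {
    var row = rows.next();
    addRow(totals, row['KeywordMatchType'],
           Number(row['Clicks']),
           parseMoney(row['Cost']),
           Number(row['Conversions']));
  }
  return totals;
}

// Cost can be reported as a formatted string, e.g. "1,234.56".
function parseMoney(s) {
  return Number(String(s).replace(/,/g, ''));
}

// Accumulate one report row into the per-match-type totals and
// recompute CPA (cost per conversion) for that match type.
function addRow(totals, matchType, clicks, cost, conversions) {
  var t = totals[matchType] ||
      (totals[matchType] = { clicks: 0, cost: 0, conversions: 0 });
  t.clicks += clicks;
  t.cost += cost;
  t.conversions += conversions;
  t.cpa = t.conversions > 0 ? t.cost / t.conversions : null;
}
```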
Section 2: Query match types
Building on the example from section 1, advertisers could look more deeply into their keyword data and add a segment for “search terms match type.” Now the example from before will look like this:
Of those 69 impressions for a phrase match keyword, 51 impressions happened when the query was exactly the same text as the phrase match keyword (or an exact close variant), hence Google counts it as an “exact match.” Now the aggregated stats would look something like this in the output from the script (see script below).
The script generates Search Terms Match Type performance data to help advertisers understand how “close variants” perform relative to non-close variant match types.
The API actually reports close variants separately, so the above report tells us very clearly that in this account, exact match close variant search terms are driving conversions at a lower CPA than pure exact match. Perhaps that’s an indicator that this account could benefit from a keyword buildout, or perhaps just an indication that Google is doing a good job driving additional conversions for the account.
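Because close variants show up as their own segment in search term reports, the comparison above boils down to computing CPA per segment. Here’s a minimal sketch of that step on sample data; the segment labels (“near exact” for exact close variants) and all numbers are illustrative assumptions, not real account data.

```javascript
// Given stats already aggregated by search term match type (the
// reporting API lists close variants as separate segments), compute
// CPA per segment so pure exact and close variant performance can be
// compared side by side.
function cpaBySegment(statsByQueryMatchType) {
  var out = {};
  Object.keys(statsByQueryMatchType).forEach(function (segment) {
    var s = statsByQueryMatchType[segment];
    out[segment] = {
      conversions: s.conversions,
      cpa: s.conversions > 0 ? s.cost / s.conversions : null
    };
  });
  return out;
}

// Illustrative numbers: close variants converting at a lower CPA
// than pure exact, like the account described above.
var sample = {
  'exact': { cost: 300, conversions: 20 },       // pure exact: CPA 15
  'near exact': { cost: 100, conversions: 10 }   // close variants: CPA 10
};
var result = cpaBySegment(sample);
```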
A script for managing close variants by performance
So now you’re equipped with two scripts that help gauge the impact of close variants on your account. If you find that performance is impacted and you want to start controlling close variants by adding some negative keywords, you can use this script (see below).
I took last month’s reporting script and added the capability to add negative keywords based on some rules you specify.
Rule 1: Automatic negatives for very different words
Like I said before, I like how close variants send me traffic for lots of typos of our brand name. These misspellings tend to be 1 or 2 characters different from my keywords. A good way to count the difference in characters between the query and the keyword is with the Levenshtein distance. This score calculates the number of characters that have to be changed, added or deleted to turn one string into another.
The Levenshtein distance measures how many single-character edits are required to turn one string into another string. In this example, we use it to calculate how close the word “pajamas” is to “pjs.”
In this example, we noticed that our keyword “pajamas” was shown when the query was “pjs.” This string transformation required 4 steps so it has a Levenshtein Distance of 4.
In our script, we can set the threshold for the largest Levenshtein Distance before we add the query as a negative keyword.
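For reference, here is a minimal implementation of the Levenshtein distance and the kind of threshold check the rule describes. The function itself is a standard dynamic programming implementation; the threshold name and value are illustrative assumptions.

```javascript
// Classic dynamic programming Levenshtein distance: the minimum
// number of single-character insertions, deletions or substitutions
// needed to turn string a into string b.
function levenshtein(a, b) {
  var prev = [], cur = [], i, j;
  for (j = 0; j <= b.length; j++) prev[j] = j;
  for (i = 1; i <= a.length; i++) {
    cur[0] = i;
    for (j = 1; j <= b.length; j++) {
      var substCost = a.charAt(i - 1) === b.charAt(j - 1) ? 0 : 1;
      cur[j] = Math.min(
          prev[j] + 1,              // deletion
          cur[j - 1] + 1,           // insertion
          prev[j - 1] + substCost); // substitution
    }
    prev = cur.slice();
  }
  return prev[b.length];
}

// Illustrative threshold: queries farther than this from the keyword
// get added as negative keywords. "pjs" is 4 edits from "pajamas",
// so it would be negated; a one-character typo would be kept.
var MAX_LEVENSHTEIN_DISTANCE = 3;

function shouldAddNegative(query, keyword) {
  return levenshtein(query, keyword) > MAX_LEVENSHTEIN_DISTANCE;
}
```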
Rule 2: Automatic negatives for low performance queries
The second approach afforded by this script is to add negative keywords for any close variant query that has too much cost and no conversions. This assumes that there are at least some clicks on the keyword so it’s a less proactive approach than rule 1 but it’s also less aggressive and lets the performance data drive the decision rather than the semantic analysis.
Of course there are many ways to determine “low performance” and this is just one example. With a bit of experimentation, you should be able to change the query to make it follow your preferred method for when to add negative keywords.
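As one concrete example of such a rule, the filter below flags close variant queries with meaningful spend and zero conversions. The threshold and the row shape are illustrative assumptions; in the actual script, the rows would come from a search terms report restricted to close variant matches.

```javascript
// Illustrative spend threshold, in account currency units: a query
// with no conversions must exceed this before it gets negated.
var MAX_COST_WITHOUT_CONVERSIONS = 50;

// Return the queries that cost too much without converting.
function queriesToNegate(closeVariantRows) {
  return closeVariantRows.filter(function (row) {
    return row.conversions === 0 &&
           row.cost > MAX_COST_WITHOUT_CONVERSIONS;
  }).map(function (row) {
    return row.query;
  });
}

// Sample rows (made-up numbers):
var sample = [
  { query: 'cheap pjs', cost: 80, conversions: 0 },     // negate
  { query: 'silk pajamas', cost: 120, conversions: 4 }, // keep: converts
  { query: 'pajama pants', cost: 12, conversions: 0 }   // keep: low cost
];
// queriesToNegate(sample) → ['cheap pjs']
```

Swapping the filter condition (say, for a CPA ceiling instead of a zero-conversion rule) is all it takes to encode a different definition of “low performance.”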
A script to manage close variants in SKAG ad groups
There’s one more script you might find useful to deal with close variants. This one comes courtesy of Steve Hammer. Rather than proactively blocking all close variants, or doing so based on performance, it blocks only close variants that interfere with a single keyword ad group (SKAG) management structure.
SKAG ad groups are often deployed in an alpha/beta account management structure. The alpha campaign is populated with SKAGs based on queries that drove conversions in the past. Each ad group has one exact match keyword in it. When queries occur with any variation of that exact match keyword, those are expected to be directed to the beta campaigns where keywords with less restrictive match types are kept, often in ad groups with several related keywords.
You can grab the script here and it will identify your SKAG ad groups based on a common element in the ad group naming convention (usually the presence of the text “skag”). If it finds close variant queries occurring in those ad groups, it will add them as negative keywords.
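A sketch of that approach looks roughly like the following. This is not Steve Hammer’s script, just an outline of the idea: the “skag” marker, the report fields and the “near …” segment prefix for close variants are all assumptions to check against the reporting documentation before relying on them.

```javascript
// Marker used in ad group names to identify SKAG ad groups (assumption).
var SKAG_MARKER = 'skag';

function isSkagAdGroup(adGroupName) {
  return adGroupName.toLowerCase().indexOf(SKAG_MARKER) !== -1;
}

// For every close variant query matched inside a SKAG ad group, add
// that query back as an exact match negative so future traffic can
// route to the less restrictive beta campaigns instead. Runs only in
// the Google Ads Scripts environment.
function addNegativesForSkags() {
  var report = AdsApp.report(
      'SELECT AdGroupId, AdGroupName, Query, QueryMatchTypeWithVariant ' +
      'FROM SEARCH_QUERY_PERFORMANCE_REPORT DURING LAST_30_DAYS');
  var rows = report.rows();
  while (rows.hasNext()) {
    var row = rows.next();
    // Close variants are reported as "near exact" / "near phrase".
    var isCloseVariant =
        row['QueryMatchTypeWithVariant'].indexOf('near') === 0;
    if (isSkagAdGroup(row['AdGroupName']) && isCloseVariant) {
      var adGroups =
          AdsApp.adGroups().withIds([Number(row['AdGroupId'])]).get();
      if (adGroups.hasNext()) {
        adGroups.next().createNegativeKeyword('[' + row['Query'] + ']');
      }
    }
  }
}
```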
I love Google Ads Scripts because they let us quickly automate tedious processes and reports that are necessary to keep agencies running while they work out new strategies based on the ever-changing PPC landscape. When Google makes exact match keywords less exact, that’s a prime example of where we can use scripts to:
- Better understand the high level impact with a match type performance report
- Get granular insights into what Google means by “same intent”
- Undo the close variants we don’t like
I hope some of the scripts shared here today will prove useful in your day-to-day PPC management.
Query match types script:
Managing close variants by performance script:
Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.