SEM is a peculiar field, because it straddles the high-tech world of targeting algorithms and the old-fashioned world of human grunt work. Most of us in the industry are more comfortable talking
about the high-tech aspects of SEM than we are about the generally low-tech roles that humans play in it, but I'd argue that it's very often the human element that makes the difference between a
winning and a failing search campaign.
We often take over search campaigns that are riddled with basic errors. For example, ad groups (which ideally should contain only a handful of
thematically related keywords) may instead be populated with hundreds of marginally related ones; Broad Match and DKI (Dynamic Keyword Insertion) may be overused; negative keywords may have
been badly neglected; and other basic errors may be handicapping the campaign. In some cases the campaigns we inherit are so poorly organized that they've got to be reconstructed from scratch.
Often, these basic errors aren't the result of any particular incompetence on the part of the previous SEM agency or in-house team, but the natural result of progressive "tweaking" over time in
response to different client/management demands: a case of "too many cooks spoiling the broth."
Undoing this complicated mess often means starting with a clean slate, and this is where all
the grunt work comes in, including keyword expansion, campaign reorganization, engine/network selection, and other basic tasks. The objective here is to establish a campaign which is as close to
error-free as possible, and while there are tools which can assist in all of these processes, the importance of human oversight and human QA cannot be overstated.
The exciting part begins
when the debugged campaign, its critical infirmities corrected, is allowed to "take flight" again. This is where the second level of testing occurs, including creative/offer testing, bid
range/elasticity testing, and the controlled application of additional targeting filters such as geo, daypart, and demographic. Second-level testing may take some time to establish a new performance
baseline, especially if the given campaign is one in which long-tail keywords predominate over power (high volume) keywords. The rewards, of course, come when one can report to the client that a given
campaign, once thought to be an unprofitable, hopeless morass, is once again a functioning, profitable one.
Clients are to be forgiven for concluding that some kind of technological magic
wand was waved over the campaign to achieve this. In reality, what made the difference was good old-fashioned labor and the application of best practices. This is why the issue of SEM staff
compensation (which I addressed last week) is such an underrated issue in our nascent field. One can have the best technology in the
world, the best strategy, and the best analytics, but unless one can marshal experienced, motivated people to execute, one cannot really expect to compete in today's environment.
Keep all of this in mind if you're at SES New York this week (I plan to be there). There will be plenty of vendors and panelists directing your attention toward exciting new "breakthrough
technologies to identify your best customers" -- and there's nothing wrong with that. But while it's always tempting to become enraptured by advancements in targeting, one must keep the all-important
basics in mind, because unless your campaign has a sturdy foundation, all the sexy targeting bells and whistles in the world won't help you gain ROI and long-term market share. Don't bypass the sessions
that may seem "basic" to you but may give you a firm foundation on which to build future growth.