There is much speculation about which professions AI will eliminate. Depending on which generative AI tool you ask, you will get a list of 10 to 30 criteria for judging which jobs are at risk. Many of them matter only on the surface.
I believe that the top three considerations are:
- How easily and how effectively can a particular problem be solved with machine learning (ML) vs. a human?
- What is the total addressable market (TAM) for the solution?
- How much training data is available, and how clean is it?
The first two criteria determine the value-to-effort ratio — the main factor in product development and investing decisions. The training data is a necessary building
block.
When thinking about the first criterion, consider what is more natural for a machine vs. a human to do. As a land mammal, you will not out-swim a great white shark. But, given the advantage of an opposable thumb, you can surely out-climb it in your natural habitat: dry land.
When talking about AI, most folks mention
generative AI, large language models (LLMs), and self-driving cars.
Indeed, those technologies are impressive, and they are just getting started. However, while humans take speech and vision for granted, natural language processing and computer vision are extremely difficult and unnatural problems for AI.
If you have ever built a simple image classifier from
scratch, you will know there is much to it.
Many calculations need to happen just to recognize the edges in an image. Then, additional neural network layers combine those edges into basic shapes, more complex shapes, objects, and so on. ChatGPT and Tesla's Full Self-Driving cars are like sharks that, with investments of hundreds of billions of dollars, have been trained to compete with human rock climbers.
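To give a feel for how many calculations go into even that first edge-recognition step, here is a minimal sketch in Python. The tiny 5x5 "image" and the hand-picked Sobel filter are purely illustrative; a real classifier learns thousands of filters and stacks many layers of them.

```python
import numpy as np

# Illustrative only: a tiny 5x5 "image" that is dark on the left and bright on the right.
image = np.array([
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
], dtype=float)

# A classic hand-crafted vertical-edge (Sobel) filter; neural networks learn such filters.
sobel_x = np.array([
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
], dtype=float)

def convolve2d(img, kernel):
    """Slide the kernel across the image, summing element-wise products at each position."""
    kh, kw = kernel.shape
    out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            out[r, c] = np.sum(img[r:r + kh, c:c + kw] * kernel)
    return out

edges = convolve2d(image, sobel_x)
print(edges)  # the largest values appear exactly where the dark-to-bright edge is
```

Even this toy example performs dozens of multiplications for a single filter on a single 5x5 image; scale that to megapixel frames, thousands of learned filters, and dozens of layers, and the cost of teaching a machine to "see" becomes clear.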
What helps justify the massive investments is a virtually unlimited TAM. It also helps that a lot of training data, such as publicly available text and images, exists.
Programmatic campaign management sits at the opposite end of these criteria. Campaign optimization tasks are incredibly easy for machines to do. These are just calculations, after all, and calculations are the computers’ natural domain. Estimating how each of the millions of mobile apps and web pages will perform for your new ad campaigns is as easy for ML as lifting your foot over a doorstep is for you when you enter your house.
Planners are entirely out of their element when attempting to compete with
machines in campaign optimization.
Could a planner do the same calculations that ML does, for example, using Excel? If they knew the proper math, then yes. But it would take them months to do
the same math that ML does in milliseconds. Humans are utterly outgunned in this competition.
So why do planner jobs still exist? Three reasons:
- Relatively limited TAM. The tasks are company-specific. There is no universal algorithm to solve all of the DSPs’ and agency trading desks’ optimization problems; one-size-fits-all solutions do not work.
- Fragmentation of data. Ad tech companies guard their data, and for good reason. Whereas LLMs can train on Wikipedia, the BookCorpus, and the billions of web pages they can crawl, each DSP algo trains only on the data generated by the ads that DSP serves.
- Inertia and lack of knowledge. Despite the above difficulties, DSP ML solutions are evolving rapidly. For now, media executives who themselves lack data science knowledge fail to recognize that the value planners add is shrinking and can even turn negative. After all, to check someone else’s math, you need to know math.
Time is not on your side.
Open-source ML libraries and computing power are rapidly getting better and cheaper. Even with a relatively low TAM, a robust ML solution is becoming more and more feasible for most ad tech companies. You will be replaced very soon unless you adapt.
Play to Your Strengths and Become the Master of the Machines
A wise person
once said: “AI will not replace you. A human using AI will.”
During the dawn of the Industrial Revolution in early 19th-century England, the members of the infamous Luddite movement correctly determined that steam-powered machines were becoming far more productive than human laborers at textile making, metalworking, printing, and other trades. They responded to the looming threat much as many of today’s digital media professionals do: they continued with their manual labor. Many Luddites engaged in protests and even tried to destroy the machines.
Yet, some savvy folks recognized that instead of competing with the machines, it was more advantageous to be the one who enables the machines.
In today’s
“dark factories,” a handful of humans oversee thousands of robots via a few monitors from a small, comfortable room. They rarely have to step onto the factory floor (hence, no lights are
needed). These workers are well-paid and have far more rewarding careers than the manual textile weavers of the 1800s. In my opinion, planners must evolve into this type of role — and they must
do so quickly.
Here are some things you, the experienced planner, can do to succeed as a master of machines. Each of them involves playing to your strengths.
- Set goals and constraints. ML cannot do this step. You must understand what is important to your clients. This task is challenging, given that clients often do not know what they want. Based on this insight, you can define the goals and constraints for ML-driven optimization. This might sound simple, but I have seen many seasoned media professionals get confused at this step. Some do not even understand the difference between an optimization goal and a constraint (see the sketch after this list). Lead by creating clarity on this.
- Define new model features. In ML, powerful model features are far more impactful than fancy, complex algorithms. Defining model features requires domain expertise, something that you have. Many folks on the data science team come from other industries, or they are too busy learning their craft to develop the in-depth understanding of ad tech that you have. Anything can potentially be a great feature. Your company may be sitting on a goldmine of valuable first-party data. You are in the best position to uncover the treasure.
- Identify flaws in the training data. ML is very good at finding patterns in data, but what if the data you feed it is flawed in the first place? Worse yet, if training data differs from production data, the algorithms will fail. I have seen many examples where, by applying their subject matter expertise, planners found bugs in training data and helped us improve the algorithms dramatically.
- Look for anything that makes no sense. “Common sense” does not exist in the ML
world. It is a human notion. On multiple occasions, my planner colleagues were able to point to oddities in our ML systems’ performance. This feedback was invaluable in identifying flaws,
finding edge cases, and enhancing the algorithms.
- Anticipate trends and unusual events. I bet your ML system doesn’t have many Black Fridays, Super Bowls, presidential elections, or other high-impact events in its training data, as most algos rely on only 14-90 days of past performance data. However, you have lived through quite a few of these events, so you can anticipate and prepare.
- Don't forget about the creative. DSP algos match the right users with the right creative at the right time and place. However, they can only operate within the set of creatives that you give them. Work with the creative teams to develop as many concepts as possible — the greater the variety, the better. ML will then sort out the matching, but it can’t develop decent creative concepts (yet).
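To make the goal-versus-constraint distinction concrete, here is a minimal, hypothetical sketch in Python. The candidate plans, the CPA cap, and the budget figure are all invented for illustration; the point is simply that the goal is the single metric the optimizer maximizes, while constraints are the hard limits every acceptable plan must respect.

```python
# Hypothetical candidate media plans (made-up numbers, for illustration only).
candidate_plans = [
    {"name": "A", "conversions": 1200, "spend": 6000},  # CPA = $5.00
    {"name": "B", "conversions": 1500, "spend": 9000},  # CPA = $6.00
    {"name": "C", "conversions": 1000, "spend": 4000},  # CPA = $4.00
]

MAX_CPA = 5.50       # constraint: cost per acquisition must not exceed $5.50
DAILY_BUDGET = 8000  # constraint: daily spend must not exceed the budget

def is_feasible(plan):
    """A plan is acceptable only if it satisfies every constraint."""
    cpa = plan["spend"] / plan["conversions"]
    return cpa <= MAX_CPA and plan["spend"] <= DAILY_BUDGET

# Goal: among the feasible plans, maximize conversions.
feasible = [p for p in candidate_plans if is_feasible(p)]
best = max(feasible, key=lambda p: p["conversions"])
print(best["name"])  # -> "A": plan B has more conversions but violates the CPA constraint
```

Notice that the plan with the most conversions loses because it breaks the CPA limit; treating a constraint as just another goal, or vice versa, is exactly the confusion described above.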
Learn Their Language: Acquire Basic Statistical Literacy
If the above list looks more like a
hybrid planner/engineer role to you, you are correct. You need to evolve into a bit of a hybrid.
A 17th-century farmer knew about seasons and crops. A 20th-century farmer also had to become a
driver, mechanic, and even a bit of a chemist. A mid-21st-century farmer will be a robotics specialist first and foremost.
As a planner, you must know a bit about statistics and the basics of ML. This will allow you to interact with the data scientists and avoid basic pitfalls, such as skipping statistical significance calculations or failing to apply the Holm-Bonferroni correction to your alpha levels when multiple treatments compete.
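To make that last point concrete, here is a minimal sketch of the Holm-Bonferroni step-down procedure, assuming you already have one p-value per treatment. The treatment names and p-values below are invented for illustration.

```python
# Holm-Bonferroni step-down procedure (illustrative p-values, not real data).
alpha = 0.05
p_values = {"creative_A": 0.003, "creative_B": 0.030, "creative_C": 0.041}

m = len(p_values)
ordered = sorted(p_values.items(), key=lambda kv: kv[1])  # smallest p-value first

significant = []
for i, (treatment, p) in enumerate(ordered, start=1):
    threshold = alpha / (m - i + 1)  # the i-th smallest p-value is tested at alpha / (m - i + 1)
    if p <= threshold:
        significant.append(treatment)  # still significant after the correction
    else:
        break  # stop: this p-value and every larger one fail

print(significant)
# -> ['creative_A']: 0.003 <= 0.05/3, but 0.030 > 0.05/2, so B and C are not significant
```

A plain Bonferroni correction would test every p-value against alpha / m; Holm's step-down version rejects at least as many hypotheses while still controlling the family-wise error rate.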
If any of this sounds foreign to you, you have much to learn. There are many great free resources. Take advantage of them.