Commentary

Models Are People Too

Ever have one of those moments when it feels like time is slowing down? I had one yesterday when I asked a group of big advertisers what I thought was a simple question. It was followed by a long, awkward pause, a bit of stammering, and then some answers. They just didn’t happen to be answers to the question I asked.

The question was posed in response to a comment by John Walthour, director of consumer insights at consumer packaged goods giant General Mills, who said, “Models don’t make decisions, people do.”

The comment came midway through a panel in which Walthour and other big marketers discussed how marketing mix models are contributing to their business results. The consensus seemed to be that while the models are an important tool, they are just another resource for marketing teams to consider in making their strategic and gut decisions. Given what we cover in RTM Daily -- you know, the increasing role automation and machine learning are playing in actual decision-making -- I followed up by pointing out, “Yes, but people make decisions to use models,” and asked if each of the panelists could give me a ballpark estimate for how much of their current decision-making was done by machines, and how much it would be three years from now. And that’s when time slowed down, and I had one of those “why do I ask these questions” moments.

What surprised me was not the flummoxed response. After all, these were marketing execs with the term “insights” in their title, so if a machine could do what they could do, well… What surprised me was the response of the people in the audience. First it was the people sitting at my table, who I could hear tittering. Then at the break, they said, “Great question, terrible answers.” Then others at the day-long forum said the same thing to me, and at least one cited it on stage during a breakout session.

Granted, most of them were not marketers, per se, but people who supply media, data, technology or models to them. Still, it was like one of those “the emperor has no clothes” moments. The problem is, I didn’t ask the question to uncover that fact. I really wanted to know how automation and machine learning were impacting the decisions of big marketers, and where they thought it was all heading.

After all, the whole point of the ARF forum was to understand what role mix models are currently playing in influencing marketers’ marketing and media decisions, and to figure out how they need to evolve to deal with a “multi-screen” media world -- especially one where “Big” and granular data alike are streaming at them at megabits per nanosecond. You know, the volume and speed of information that machines are better at parsing than humans are.

Heck, isn’t that the reason the original mix models were created back in the early ’90s? I sort of remember talking to Ed Dittus, founder of Media Marketing Assessment (MMA), who pioneered the field back then, when the data wasn’t so big or fast. His point then was that machines were better at processing all that data than people, and I’m guessing the need has only grown as the volume and complexity of data signals has increased exponentially in recent years.

In retrospect, I think I may have asked the wrong question. Maybe I shouldn’t have asked them how much they rely on machines to help them parse information to make their decisions -- now, or in three years’ time. Maybe I should have asked them how they plan to come up with actionable consumer insights from all that data without becoming more reliant on machines.

I would really like to know.

Image courtesy of Shutterstock.

5 comments about "Models Are People Too".
  1. Bill Harvey from TRA, Inc., October 30, 2013 at 5:10 p.m.

    Joe it was a great question - i think it would have helped them answer if you mentioned "programmatic buying" - they would have said that those granular decisions might be trusted to models under certain conditions - i hope i am in the room again the next time you ask :-) Best, Bill

  2. Pete Austin from Fresh Relevance, October 31, 2013 at 6:21 a.m.

    Answer is zero. 100% of decisions are made by people. The fact that these people use increasingly advanced tools to help - e.g. newspapers, research, internet, automation scripts - is irrelevant. Here's an example to clarify the issue: when this Web page renders, the browser makes literally hundreds of choices about how to render the words - but the only *decisions* about whether to read the words are made by you, the reader. The issue is confused by how computer programmers use the word "decision" to mean something that's not a decision at all, but a predetermined choice.

  3. Pete Austin from Fresh Relevance, October 31, 2013 at 6:30 a.m.

    Also, we've been reliant on machines for two million years, so it's a bit late to worry now. http://news.discovery.com/history/human-ancestors-oral-hygiene-the-toothpick-131008.htm.

  4. Joe Mandese from MediaPost Inc., October 31, 2013 at 7:25 a.m.

    @Pete Austin: Re. point one, I don't necessarily agree with your example about a browser rendering a page based on the user's decisions. Some of what loads on the page is determined by machine logic, including elements programmed by the publisher, but especially ones programmed by advertisers. Increasingly, the ads called on that page are served based on audience attributes determined by a machine (i.e., programmatic buys, retargeting, etc.). Re. the second point, I'm not suggesting we worry about it, I'm just trying to understand how human decision-making is being augmented by machine decisions. My sense is that over the past two million years, people have become more reliant on machines for decisions. It seems to me that this process is accelerating.

    @Bill Harvey: You are absolutely right that I failed to frame the question properly, but I was asking about things beyond programmatic buying. When I heard Brad Smallwood of Facebook talking about the need to begin factoring in all the new "granular" data being generated by users on Facebook, Twitter, etc., it struck me that that's also too much information for people to process. Maybe I don't understand the logic of machine learning, but I thought it was to routinize decisions so that people don't need to make them. Ideally, they are low-level decisions, but sometimes -- like auto-pilots flying jumbo jets with hundreds of souls on board, or Google cars driving themselves with just a couple on board -- they are pretty high-level too.

  5. Rolf Torvar from TC Consulting, November 1, 2013 at 12:48 p.m.

    Dear Sir, Perhaps you created confusion with your question. You asked the ARF marketers "how much of their current decision-making was done by machines."

    Later, in your article you complain, "Maybe I shouldn’t have asked them how much they rely on machines to help them parse information to make their decisions..." But you did not ask them "how much do you rely on machines to help parse information?" You asked them: "how much of their current decision-making was done by machines." It sounds like you got a straight answer and the confusion (or slowing down of time) is all yours. Unless you are asking about programmatic buying by an ad server, machines do not make decisions. Human beings make decisions based on analytics coming from machines.

    Your original question as you posed it is like me asking, "What machine decided you were going to the ARF meeting?" You of course would respond that you decided that you will attend the ARF, and not a machine. If I were to turn around and complain that you did not indeed make a decision because accountants at MediaPost use Quickbooks to set budgets, and that a machine decided if you have enough budget to make the trip to this ARF meeting, then I would be the one slowing down time by being pedantic.
