Commentary

Mauled By Models

True story...

A VP marketing at a global consumer financial services company has, for years, been "justifying" the ROI on marketing spend using a sophisticated marketing mix model. This model, developed painstakingly over a two-year period and refined continuously over the subsequent five, provided the foundation of his argument that marketing was providing $X in incremental profit for every dollar spent on marketing.

The model was good. Taking into account the 12-month purchase cycle, it looked not only at media mix but also at changes in sales promotion programs, email, web, and mobile marketing initiatives. It also factored in competitive spending, brand preference scores, and customer referral likelihood. In short, he was able to use the model to "isolate" the relative contribution of marketing activities versus other things the company was doing (e.g., adding sales reps, opening new channel partnerships, etc.).
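For readers who haven't lived inside one of these: at its core, a marketing mix model of this kind is a regression of sales on the marketing and competitive inputs, with the fitted coefficients read as each activity's "contribution." Below is a minimal sketch in Python (pandas and statsmodels); every variable name and number is invented for illustration and has nothing to do with the VP's actual model.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 84  # e.g., seven years of monthly observations (invented)

    # Hypothetical drivers the model "takes into account".
    df = pd.DataFrame({
        "media_spend": rng.uniform(1.0, 5.0, n),
        "promotions": rng.uniform(0.0, 2.0, n),
        "email_mobile": rng.uniform(0.5, 1.5, n),
        "competitive_spend": rng.uniform(1.0, 4.0, n),
    })
    # Synthetic sales built from assumed "true" effects plus noise.
    df["sales"] = (10
                   + 2.0 * df["media_spend"]
                   + 1.2 * df["promotions"]
                   + 0.8 * df["email_mobile"]
                   - 0.5 * df["competitive_spend"]
                   + rng.normal(0, 1.0, n))

    X = sm.add_constant(df[["media_spend", "promotions",
                            "email_mobile", "competitive_spend"]])
    fit = sm.OLS(df["sales"], X).fit()
    print(fit.params)  # read as incremental sales per dollar, by activity

The catch, of course, is that those coefficients only "isolate" anything if the world generating next year's data looks like the world that generated the last seven.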

His confidence in his model was so high and his quantitative "proof" so rich that when challenged on the company's increasing commitment to marketing spend as a percentage of sales, he would regularly reply "the model doesn't lie." And for the longest time, no one would challenge the VP or the power of his model.

Then the CFO retired and a new one came in, from an industry where models hadn't been used. In one of his first meetings with the new CFO, the VP started into his marketing justification presentation, showing the steady improvement in incremental profit per dollar of marketing spend, when the CFO stopped him in his tracks and asked, "How does your model account for the substantial changes in consumer attitudes and behaviors in the current economic environment?"


The VP was initially stunned. No one had ever dared to question the validity of his model before. He immediately began to describe the list of variables that his model DID take into account, and rationalized that such breadth must therefore reasonably account for the economic environment. But the CFO cut him off by announcing that the model could NOT be used as the basis for marketing spend decisions unless/until it could be shown how it maintained its validity in the face of shocks to the economy. Further, the CFO stated that given the company's tight performance expectations in the next few quarters, marketing spending would likely have to be cut back.

Partially angry at the nerve of the CFO to question his authority and expertise, and partially embarrassed by his oversight of economic factors in the model, the VP sulked back to his office and called in his head of marketing intelligence. They talked through the implications of the CFO's questions, ranging from the specifics of enhancing the model to the risk of losing all the credibility gained over the past few years.

My question: How would you handle this if you were in the VP marketing role? What would you do next? How would you respond?

I'll tell you what really happened in my next post.


17 comments about "Mauled By Models".
  1. Henry Harteveldt from Forrester Research, April 3, 2009 at 2:55 p.m.

    Sounds like the VP should also be calling his favorite recruiter to meet for a drink - or 10.

  2. Robert Zager from iconix, inc., April 3, 2009 at 3:16 p.m.

    As a reformed economist, the first thing you learned about modeling in econometrics was the Latin phrase "ceteris paribus" -- an express acknowledgement that your model was limited by the circumstances it explained. I suspect the gurus at AIG, Lehman, Washington Mutual, Citi and the rest will tell you how well their models worked. Right up until they failed. The CFO was right to challenge the assumptions that are implicit in the model.

  3. Will Larson from Ticketmaster / Live Nation Entertainment, April 3, 2009 at 3:24 p.m.

    I would say that the model, if it uses time-phased sales data as an exogenous input, may account for SOME of the "consumer attitudes and behaviors in the current economic environment." In other words, a downtick in consumer spending should lower the forecast for future consumer spending. Also, something that takes long-term macroeconomic trends into account, such as the Hodrick-Prescott filter, might help. The output will obviously have some error, but fine-tuning should help keep the error minimal.
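    For concreteness, a minimal sketch of the Hodrick-Prescott idea, assuming Python with statsmodels and an invented quarterly sales series (none of this is the VP's data):

        import numpy as np
        from statsmodels.tsa.filters.hp_filter import hpfilter

        # Invented quarterly sales series; the dip at the end stands in
        # for the current downturn.
        quarterly_sales = np.array([100, 104, 103, 108, 110, 112, 109,
                                    114, 116, 113, 98, 90], dtype=float)

        # lamb=1600 is the conventional smoothing value for quarterly data.
        cycle, trend = hpfilter(quarterly_sales, lamb=1600)
        print(trend)  # long-run trend, a candidate macro input to the mix model
        print(cycle)  # short-run deviations, where the downturn shows up

    The decomposition won't by itself fix a model that is blind to the macro environment, but it gives the model an explicit trend term to react to.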

  4. Jun Wong from acuwebmedia, April 3, 2009 at 4:10 p.m.

    Well, I assume that in the next part of the story your VP of marketing is going to try to promote his agenda of increasing marketing spend.

    He is in "consumer financial services," and I think you've stated the model can factor in competitive spending.

    I wouldn't be surprised if he comes back with some numbers stating they should spend even more on marketing.

  5. Wilfredo Pena from Pace University, April 3, 2009 at 5:36 p.m.

    I really like Jody's suggestion. It sounds practical. Another day at the office. Problem=solution

  6. Howie Goldfarb from Blue Star Strategic Marketing, April 3, 2009 at 5:37 p.m.

    This is exactly why the CMO has, on average, the shortest tenure of the C-level positions. Marketing ROI is very hard to measure. Every other discipline has concrete measures, but since people's individual behaviors are often wild cards, marketing measurements are always generalities.

    Since I come from the Finance/Sales side, the VP of Marketing came across like one of those quants you read about in the banking industry. The same ones who helped cause this economic crisis. The same ones who created models proving that if you take all these subprime loans and collateralize them together, the parts are less risky than the whole.

    The fact is that marketing budgets will and must reflect industry conditions as much as the individual company's condition and financial position. The VP of Marketing at GM cannot justify the marketing budget the company had prior to seeing sales fall 40%, no matter what model they created! All the same, it is not the marketing group's fault that sales fell 40%.

  7. John Grono from GAP Research, April 3, 2009 at 6:02 p.m.

    Pat, congratulations on such an insightful and thought-provoking article.

    Of course you are right - all time-series models are based on the premise that recent history is the best indicator of what is likely to happen in the near future. Implicit in the data is some measure of changes in attitude and behaviour, as the sales data is real and reflects those changes.

    However, we are at a juncture in time (or is it history?) when the rules of the game have changed so dramatically that 'yesterday' means very little.

    The die-hard econometrician may say let's go back and look at the last crash or recession ... the dotcom bust of 2001? 1991? 1981? 1973?

    The confident, honest and pragmatic CMO would say that the model simply can't predict what is happening or going to happen - as can NO-ONE in such markets - and that the model needs to be shelved for a while. Further, it will need quite some period of stability for the model to be recalibrated and to regain its accuracy.

    In the meantime it's back to "seat of the pants" flying, leveraging all the experience and learnings gained from the rigour of doing the modelling - a solid base. This is then stacked up against the CFO's desire to slash-and-burn. The CMO should accept that things WILL be cut, but should be able to argue about the DEPTH of the cut. The CFO could be challenged on the likely impact of cutting that deep: what metrics and basis (apart from cost-saving and profit) do they base this on, and how do they intend to factor the likely losses to brand equity and the need for future spending increases into the balance sheet?

  8. Scott Doniger from Wirestone, April 3, 2009 at 6:27 p.m.

    no model is "all-knowing". and this vp pretty clearly committed a series of errors. but as with many things in life, it's not what you say but how you say it. if he wasn't let go, there is ample opportunity to win the war even if this battle was lost. if he develops a strategy to articulate modifications to the existing model, broadens the marketing and measurement strategy to incorporate social feedback and other "softer" variables, and -- perhaps most important -- does more to build his case with key stakeholders prior to come-to-papa reviews, chances are he can regain both his personal stature (though as an outsider i am already suspicious and doubtful he really deserves to...) as well as re-build marketing's image as a revenue-generator rather than simply an expense.

  9. Steve Haar from Fanatically Digital, April 3, 2009 at 10:55 p.m.

    If his model takes into account behavior, which is the manifestation of attitude, then by proxy he is taking this into account. Additionally, a good model not only projects outcomes of your plan going forward; it should also be used to validate the projections on a look-back basis (incorporating those things that were different from the plan). If attitude changes had a drastic impact for which he was not accounting (as inadvertent as that accounting may have been), this should have shown itself in outcomes that were significantly different from projections and not supported in the look-backs.

    True, his oversight on the attitude metrics was a big gap. But this, in and of itself, does not invalidate the model. If he had been using these models for planning and validating, he should have had an answer for the CFO right there. If he was only using it to project, and never backward-validated, then he may have simply been riding the growth wave so many others were riding... nothing to do with modeling. His inability to address the question makes me wonder how rigorous his validations really were.
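    To make the look-back point concrete, here is a toy sketch in Python with numpy: refit on a trailing window, project one period ahead, and compare the projection with what actually happened. The data, the single-variable model, and the tolerance are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        spend = rng.uniform(1.0, 5.0, 36)  # invented monthly marketing spend
        actual_sales = 10 + 2.0 * spend + rng.normal(0, 1.0, 36)

        window = 24
        for t in range(window, 36):
            # Fit a simple least-squares model on the trailing window.
            X = np.column_stack([np.ones(window), spend[t - window:t]])
            beta, *_ = np.linalg.lstsq(X, actual_sales[t - window:t], rcond=None)
            projected = beta[0] + beta[1] * spend[t]
            error = actual_sales[t] - projected
            if abs(error) > 2.0:  # arbitrary tolerance for the sketch
                print(f"period {t}: projected {projected:.1f}, "
                      f"actual {actual_sales[t]:.1f} (model may no longer hold)")

    If the economy had really broken the model, a loop like this would have been flagging it quarter after quarter, and the VP would have had an answer ready for the CFO.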

  10. Steve Pike from pikemarketing, April 3, 2009 at 11:37 p.m.

    If the VPM was able to use the model over a SEVEN YEAR PERIOD then he's a hero in my book. I mean, the guy isolated key variables and "proved" their individual contribution? And he was able to convince everyone the model is airtight? That's strong.

    If the model is truly predictive then the CFO can ask all the questions he wants to. (And the VPM does sound a tad arrogant.) But until the model fails to predict outcomes I say run with it.

    Politically, Mikel Chertudi from Omniture makes an astute comment...show the CFO the model in a private meeting. Even a confident VPM should not try to publicly steamroll a sharp CFO who is new to the organization.

  11. Terence Chan from MediaBlog.com, April 4, 2009 at 3:47 a.m.

    Ahh, yet another case, among thousands, of 'a new broom sweeps clean' pulling the plug on CMOs in corporate boardrooms everywhere this year.

    If this marketing 'model' had a 7 year run, there is a high probability that it did not factor in ARPU (average revenue per user) aka customer lifetime value, as its fundamental bullseye for existence. ARPU cares not for economic cycles, whether disruptive or greed inducing. It just provides the gravity for everything to fall into line given whatever you have, or don't have.

    Having been a co-conspirator and evil scientist in developing 'models' of all shapes and manner as I cackled my way across 20 years of marketing communications, I am incredibly humbled by one emerging fact: models are to marketers what disgusting green-purple boiling cure-all pots and shrunken-head totems are to shamans.

    Granted that models DO make a good lie-ly-hood for everyone prancing around the camp fire chanting ROI, ROI, ROI... provided the music doesn't stop.

    Pardon the sarcasmodel, I'll just shut up now and look forward to your answer :)

  12. Hugh Seaton from Seaton Consulting, April 4, 2009 at 4:57 a.m.

    Sorry, but marketing mix models usually assume better-quality data than is available. Their purported accuracy doesn't allow for Nielsen's (or Starch's) levels of inaccuracy. They aren't without value, but they are a lot less predictive than this VP evidently thought.

  13. Terence Chan from MediaBlog.com, April 5, 2009 at 1:51 p.m.

    Oh, and what modeler factored in Bernie Madoff and Lehman Brothers?????

    The Disneyland model.

  14. Dan Woodard, April 7, 2009 at 9:20 a.m.

    A model needs to take into account the message not simply the media delivery. If this guy had been spinning his model for 7 years, there had to have been different messages aired over all those years. So, if the current message connects with the consumer in these difficult times, then I’d say using the model for a media determination is just fine. If on the other hand the message doesn’t connect with the consumer, then the media recommendation is pretty worthless.
    Bottom line: consumer sentiment/environment is much more about the message than the media dollars.

  15. Paolo Gaudiano from Infomous, Inc., April 7, 2009 at 3:13 p.m.

    As mentioned by others here, any model based purely on the extraction of trends from historical data (i.e., statistics/regression) is, by construction, going to fail when there are unexpected, dramatic changes. Today's world is changing so rapidly that statistical models simply do not work. This should not come as a surprise: statistics was invented over 200 years ago to address the need to manage large data sets with paper and pencil. Using the computer to increase the number of variables and the amount of data may increase the accuracy with which the past is replicated, but it still provides no assurance of accurate prediction of future events. However, there are new classes of models that can capture causality by simulating the behavior and decision-making processes of individual consumers - so-called "agent-based simulations." These models are successfully solving tough marketing problems, even in the presence of "discontinuous events" like new product launches, competitive activities, and economic instabilities. The VP should do some research into these and other new techniques to see if they can satisfactorily address the CFO's concerns. Insisting on adding features to the steamboat while someone else is building airplanes is not a sound business strategy and is unlikely to impress any CFO.
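    For anyone wondering what "agent-based" means in practice, here is a deliberately toy sketch in plain Python (standard library only). The agents, probabilities, and shock values are all invented and are nowhere near a production simulator.

        import random

        random.seed(42)

        class Consumer:
            def __init__(self):
                self.sentiment = random.uniform(0.3, 1.0)  # 1.0 = fully confident

            def buys(self, marketing_pressure, economic_shock):
                # Purchase probability rises with marketing exposure and
                # falls with an economic shock, scaled by personal sentiment.
                p = 0.1 + 0.3 * marketing_pressure
                p *= self.sentiment * (1.0 - economic_shock)
                return random.random() < p

        def simulate(n_consumers=10_000, marketing_pressure=0.5, economic_shock=0.0):
            consumers = [Consumer() for _ in range(n_consumers)]
            return sum(c.buys(marketing_pressure, economic_shock) for c in consumers)

        # Same marketing pressure, with and without a downturn.
        print(simulate(economic_shock=0.0))  # "normal" environment
        print(simulate(economic_shock=0.4))  # recession-like shock

    Even at this toy level, the same marketing pressure produces a different response once the environment changes, because the mechanism, not just the historical correlation, is in the model.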

  16. Rajeev Jain from GSD&M Idea City, April 8, 2009 at 7:13 p.m.

    First of all, if I were this VP, I would not be sitting pretty for 7 years on a single model. Models produce results from data and assumptions, and these should indeed change from time to time - as Jody Wright also pointed out early on in this comment stream.

    The model was refined over 5 years after the 2 years it took to develop, but we have not been told how it was refined. I was also confused by the CFO asking about "consumer attitudes and behavior" while the VP felt embarrassed about "not having put in economic factors." One can't really comment technically on this modeling "situation" without these details being filled in.

    Also often forgotten in econometric models is the difference between causality and correlation. Paolo from Icosystem refers to simulation-based modeling approaches that, if constructed and used properly, can give useful causal insights, but they require a lot more work and thought than this VP was willing to put even into his relatively simple statistical models.

    Interesting article. Can't wait to hear how it ended.

  17. Carol Williams from Media Dynamics Inc, May 6, 2009 at 12:35 p.m.

    Having been involved in many such statistical modeling exercises in our role as consultants, I can testify to the fact that the researchers have oversold their predictive value to management. Most of these models are seriously challenged by the availability of relevant or accurate data, and even when they could add more sophisticated elements, the modelers are reluctant to do so because they know that the model itself was not configured to use the information properly. A recent case involved a major consumer goods marketer who was using a supposedly predictive model that relied almost entirely on media ad dollars and sales data. In effect, this company was trying to come up with a simplistic "share of voice" formula that would be applied to all of its brands. But something was wrong. The model's results bounced all over the place when looked at on a brand-by-brand basis.

    In our capacity as consultants, we asked questions that the company's own management should have posed. Why media dollars, instead of audience delivery by target group? (The answer was that collecting such data across a ten-year period was too much trouble for the agencies.) What about the varying impact of changing brand ad campaigns? (The answer was that these were too variable in nature and hard to define.) What about the activities of rival brands? (The answer was that this was too complicated to incorporate in the design.) And so it went.

    When the researchers try to sell their models to management, the latter often think that the computers are performing some kind of magic: analyzing a myriad of variables simultaneously and finding amazing insights. In reality, most models, while utilizing a number of inputs to set parameters for their analysis, employ a single quantitative yardstick to find their "solutions". In short, they don't look at everything, as is supposed, because this is impractical. ROI and similar models can have great directional value if utilized sensibly, but they can't provide the "correct way" for all brands to deal with marketing or media issues. That's asking much too much.
