Facebook Study: That's A Nice ROI You've Got There, It Would Be A Shame If Anything Happened To It


New academic research conducted by Northwestern University and Facebook has concluded that the most common methods of online advertising measurement used by advertisers and agencies may not be as accurate as the kind of “large-scale, randomized experiments” that can only be conducted via -- pause for effect -- walled garden platforms like Facebook.

The study, based on data from 15 ad experiments conducted on Facebook and covering roughly half a billion users, found that Facebook’s so-called “conversion lift” method is superior to conventional means of digital ad measurement.
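
For readers unfamiliar with the approach, “conversion lift” rests on a simple randomized-holdout design: users are randomly split into a test group that is eligible to see the ads and a control group that is held out, and the difference in conversion rates between the two is attributed to the advertising. The Python sketch below illustrates that arithmetic; the function, the significance test, and the numbers are illustrative assumptions, not Facebook’s actual implementation.

```python
import math

def conversion_lift(test_users, test_conversions, control_users, control_conversions):
    """Estimate incremental lift from a randomized holdout experiment.

    Illustrative sketch only -- not Facebook's actual methodology.
    Assumes users were randomly assigned, so the control group's
    conversion rate estimates what the test group would have done
    absent the ads (the counterfactual).
    """
    rate_test = test_conversions / test_users
    rate_control = control_conversions / control_users

    # Incremental conversions: what the ads caused beyond the baseline.
    incremental = (rate_test - rate_control) * test_users

    # Relative lift: percentage increase over the no-ad baseline.
    lift = (rate_test - rate_control) / rate_control

    # Two-proportion z-test: is the lift distinguishable from zero?
    pooled = (test_conversions + control_conversions) / (test_users + control_users)
    se = math.sqrt(pooled * (1 - pooled) * (1 / test_users + 1 / control_users))
    z = (rate_test - rate_control) / se

    return {"lift": lift, "incremental_conversions": incremental, "z_score": z}

# Hypothetical numbers: 1M exposed users, 100K held out.
print(conversion_lift(1_000_000, 12_000, 100_000, 1_000))
# -> 20% lift, ~2,000 incremental conversions, z well above 1.96
```

The reason randomization matters, per the paper’s argument, is that observational comparisons of exposed versus unexposed users confound the ad’s effect with targeting: the users an ad platform chooses to reach tend to be more likely to convert in the first place.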

“Our findings suggest that commonly used observational approaches that rely on data usually available to advertisers often fail to accurately measure the true effect of advertising,” the researchers said in an article published in the March edition of the INFORMS Marketing Science journal.

Some marketers and agencies might see the findings as ironic, given that they have long complained that the data they need to accurately measure the true effect of their advertising has historically been withheld by Facebook, Google and other digital giants -- the so-called “walled gardens.”

“Digital platforms that carry advertising, such as Facebook, have created comprehensive means to assess ad effectiveness, using granular data that link ad exposures, clicks, page visits, online purchases and even offline purchases,” the researchers ironically asserted, adding, “Still, even with these data, measuring the causal effect of advertising requires the proper experimentation platform.”

4 comments about "Facebook Study: That's A Nice ROI You've Got There, It Would Be A Shame If Anything Happened To It".
  1. Jack Wakshlag from Media Strategy, Research & Analytics, March 6, 2019 at 11:19 a.m.

    Experimental designs have always been the cleanest way to demonstrate causality, but holding out a group to be the "control" group means a subset of the target audience is not delivered the message. Given the size and scale of these audiences, the experiment can be done with little downside. It's what's done in A/B testing of websites and in most hard sciences. Perhaps if we did more of these we would stop asking the same questions we asked 20 years ago.

  2. Ed Papazian from Media Dynamics Inc, March 6, 2019 at 12:37 p.m.

    When the findings "suggest" something, this raises alarm bells. Did the researchers actually compare the "lifts" using their method versus the "conventional" practice against a control group and prove that their method was superior -- meaning more accurate? Were actual sales or some other definitive indicators employed to show that one method was better than the other?

  3. Jack Wakshlag from Media Strategy, Research & Analytics replied, March 6, 2019 at 12:58 p.m.

    Ed, it seems to me the answer is yes: they tested Facebook's conventional method against the experimental design, and the data were online changes in website clicks or referral traffic. The new method is not new to you, I’m sure. It’s experimental vs. control group. Joe posted the actual article on the study separately from this one.

  4. Ed Papazian from Media Dynamics Inc, March 6, 2019 at 1:55 p.m.

    Thanks, Jack, I'll take a look at the study report.
