New academic research conducted by Northwestern University and Facebook has concluded that the most common methods of online advertising measurement used by advertisers and agencies may not be as accurate as the kind of “large-scale, randomized experiments” that can only be conducted via -- pause for effect -- walled garden …
Experimental designs have always been the cleanest way to demonstrate causality, but holding out a "control" group means a subset of the target audience is not delivered the message. Given the size and scale of these audiences, the experiment can be done with little downside. It's what's done in A/B testing of websites, and it's what is done in most hard sciences. Perhaps if we ran more of these, we would stop asking the same questions we asked 20 years ago.
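The holdout design described above boils down to comparing conversion rates between exposed and held-out users. A minimal sketch of that calculation, using hypothetical conversion counts (the function name and all numbers are illustrative, not from the study):

```python
# Sketch: measuring ad "lift" with a randomized holdout (control) group.
# All names and numbers here are hypothetical, for illustration only.
from math import sqrt

def lift_with_ci(conv_t, n_t, conv_c, n_c, z=1.96):
    """Absolute and relative lift of the exposed group over the holdout,
    with a normal-approximation 95% confidence interval on the absolute lift."""
    p_t = conv_t / n_t          # conversion rate, exposed (treatment) group
    p_c = conv_c / n_c          # conversion rate, held-out (control) group
    abs_lift = p_t - p_c
    rel_lift = abs_lift / p_c   # e.g. 0.20 means 20% relative lift
    se = sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return abs_lift, rel_lift, (abs_lift - z * se, abs_lift + z * se)

# Hypothetical campaign: 1,200 conversions among 100,000 exposed users,
# 1,000 conversions among 100,000 held-out users.
abs_l, rel_l, ci = lift_with_ci(1200, 100_000, 1000, 100_000)
# abs_l = 0.002 (0.2 points), rel_l = 0.2 (20% relative lift)
```

Because assignment to the holdout is random, the difference in rates can be read causally; the confidence interval is what the observational "conventional" methods cannot honestly provide.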
When findings merely "suggest" something, that raises alarm bells. Did the researchers actually compare the "lifts" produced by their method versus conventional practice against a control group, and prove that their method was superior, meaning more accurate? Were actual sales or some other definitive indicators used to show that one method was better than the other?
Ed, it seems to me the answer is yes: they compared Facebook's conventional method against the experimental design. The data was the change in online website clicks or referral traffic. The "new" method is surely not new to you; it's simply experimental vs. control group. Joe posted the actual study article separately from this piece.
Thanks, Jack, I'll take a look at the study report.