With our modern, connected world rapidly changing, how do we keep up with it?
There are two major ways. The obvious one is to observe the changes and adapt ourselves to them. The second is to use goals to help us understand how change affects our brands and our audiences. It’s nice to know that a sharing economy has arisen, but how exactly does that affect our brand? We learn that by forming hypotheses about our projects and measuring them as they move along.
To do this, we have to embrace a concept called “variance.” Variance measures how far any given effort lands from its goal, expressed as a percentage of that goal. If we set a goal of shipping 2,000 units and miss it by 10, our variance is only -0.5%. Even though we missed our goal, it would be silly to criticize ourselves for such a slight shortfall. On the other hand, if we exceed our goal by 200, our variance becomes +10%. That’s big, and it may offer a learning opportunity.
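The arithmetic here is simple enough to sketch in a few lines. This is a minimal illustration of the percentage-variance calculation described above; the function name is our own, not from any analytics tool.

```python
def variance_pct(goal, actual):
    """Return the signed variance between a goal and an actual result,
    as a percentage of the goal."""
    # Multiplying before dividing keeps the round-number examples exact.
    return (actual - goal) * 100 / goal

# The shipping example: a goal of 2,000 units.
print(variance_pct(2000, 1990))  # missed the goal by 10   -> -0.5
print(variance_pct(2000, 2200))  # beat the goal by 200    -> 10.0
```

The sign tells you the direction of the miss, and the magnitude tells you whether it is worth investigating.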
Let’s say you have a chain of restaurants with an overall business goal of increasing sales. You look at your numbers, compare them to industry data, and realize that your midweek lunch sales are lagging behind the averages. This tells you that lunch is an area where you have a good opportunity to achieve lift, or improvement.
You also know that your most loyal customers are your best bet for increasing sales. It turns out that most of them have downloaded and installed an ordering app that your team created a year ago. And so, you create an in-app promotion that offers a free order of French fries for anyone who stops by for lunch Monday through Friday.
Next, you need to figure out what success would look like. Luckily, you’ve been doing in-app promotions for a year and have plenty of data. Based on a review of that data, you conclude that you can probably drive 1,000 extra lunches.
You then run the promotion. For the purposes of argument, let’s say that 1,200 people take advantage of it. Congratulations, you did well. But remember, we weren’t simply evaluating performance on this effort. We also wanted to learn. So you look at the variance. In this case, it’s +20%. This is significant and you should figure out why.
To do this, you first need to rule out the possibility that your initial estimate was simply too low. Look at the images, copy, and other creative elements to see if anything differs significantly from past promotions. In this case, you realize that the push-notification headline “Free fries” drove a much higher open rate than previous promotions. The learning? Your customers like simple messages that tout free items. You’ll want to confirm that with future tests, but at least you’re now working toward a better understanding of your customers.
On the other hand, let’s suppose you only had 960 lunches. This is a variance of -4%. Do you fire the team for not living up to your expectations? Of course not. You didn’t miss by much, and you don’t want them to think of your measurement program as a Sword of Damocles hanging over their heads. Instead, you want to determine whether something caused the miss or it was simply noise. In this case, you notice that you dropped most of the promotions on a Sunday evening. You then look at similar promos sent out on Sunday evenings and realize that anything sent at that time underperforms expectations. Again, you’ve learned something.
We cannot stress the concept of variance enough. To figure out what works for your brand, you need a measuring stick, and it should be as accurate and useful as possible. You can’t simply rely on available metrics that seem to show success.
For example, a few years ago, our agency did an online video for Samsung about a young guy who doesn’t realize that his stuffed dog isn’t real. He takes the dog to the park, introduces it to strangers and neighbors, and even lets it play with another dog that tries to have sex with it. The underlying concept was that a real premium monitor like Samsung’s always beats a fake one.
The video blew up, getting more than 2 million views. That may seem like a massive success, but what if our target expectation was 5 million? Then we failed pretty miserably. Or what if we had expected only 1,000? In that case, we crushed it. In fact, as you’ve probably guessed, our major criterion on the project had more to do with pushing sales than with getting eyeballs; the eyeballs were a bonus. Sales did improve, but nothing like the gaudy view total would suggest. As you get into viral-crazy territory, the quality of your audience inevitably declines. You may affect your brand positively, and that’s always a good thing, but your sales will not rise in lockstep with the variance in your views.