The jury is in. According to a recent Multiscreen Video Advertising Report, 89% of agencies and 96% of media companies expect to run multiscreen
campaigns this year. As the number of devices and the complexity of ad executions increase, it’s increasingly challenging—and vital—to ensure consumers can effectively engage with
your content on any screen.
To keep up with the speed and intricacy of today’s campaigns, our approach to quality assurance testing (QA) needs to evolve.
My team at Mixpo and I have run hundreds of multiscreen campaigns. From these campaigns, we’ve identified the following five best practices for delivering exceptional user experiences—and outstanding business results—across any and every device:
1. Your launch plan is gospel
The kickoff call and schedule are just as vital for a multiscreen campaign as they are for any other. Get organized and share the launch plan with all parties involved. If the plan changes, communicate schedule updates quickly so everyone's on the same page. As launch approaches, testing is often the first thing cut to meet a deadline. Stand your ground and give yourself time! Good QA saves time and headaches later.
2. Define and document expected behavior during the creative process
The creative development process isn’t just about look and feel anymore. With so many different ways
to initiate an ad, enable video and engage with interactive features, it’s important to define functionality upfront. If your mobile ad will launch a video in the native player, make sure the
client understands what that means and how it’s different from in-line video. Confirm that navigation is “tap” versus “swipe.” Taking an extra minute to verify and
document behaviors will set proper expectations and provide better inputs for testing.
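One way to make those documented behaviors actionable is to capture them as structured data rather than prose, so the same spec that the client signs off on can later drive the QA checklist. A minimal sketch in Python; the feature names and values here are hypothetical examples, not a standard schema:

```python
# Hypothetical expected-behavior spec, agreed with the client during
# creative development. Each entry records the behavior the ad should
# exhibit, so QA can verify against it later.
EXPECTED_BEHAVIOR = {
    "video_playback": "native_player",  # vs. "inline"
    "navigation": "tap",                # vs. "swipe"
    "audio": "off_by_default",
}

def qa_checklist(spec):
    """Turn the agreed spec into human-readable test steps."""
    return [f"Verify {feature} behaves as '{expected}'"
            for feature, expected in spec.items()]

steps = qa_checklist(EXPECTED_BEHAVIOR)
for step in steps:
    print(step)
```

Keeping the spec in one versioned place means a change agreed on a call has a single source of truth, instead of living in three people's inboxes.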
3. Standardize your test matrix
The good news is that multiscreen creative tools do the heavy lifting when it comes to ensuring that the creative “vision” works seamlessly across every combination of device and environment. Even so, in ad ops, we often push the limits of what’s technically possible and need to QA cross-device at the campaign level. It’s important to have a test suite and test plan that covers a representative set of devices, browsers and operating systems. Make sure your matrix is communicated to clients up front and includes device combinations that are specifically important to the campaign at hand. We suggest testing in the agreed-upon QA matrix at the creative approval stage and again before launch.
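A standardized matrix is easiest to share and reuse when it lives in code or config rather than a one-off spreadsheet. Here is a minimal Python sketch of that idea; the specific devices, OS versions, and function names are illustrative assumptions, not a recommended baseline:

```python
# Hypothetical baseline QA matrix: a representative, versioned set of
# screen/OS/browser combinations that every campaign is tested against.
BASE_MATRIX = {
    "desktop": [("Windows 11", "Chrome"), ("macOS", "Safari")],
    "mobile":  [("iOS", "Safari"), ("Android", "Chrome")],
}

def build_test_plan(campaign_specific=None):
    """Flatten the baseline matrix into test cases, then append any
    combinations that matter specifically for this campaign."""
    cases = [
        (screen, os, browser)
        for screen, combos in BASE_MATRIX.items()
        for os, browser in combos
    ]
    for extra in campaign_specific or []:
        if extra not in cases:  # avoid duplicate test cases
            cases.append(extra)
    return cases

# Example: this campaign also has a tablet placement.
plan = build_test_plan([("tablet", "iPadOS", "Safari")])
```

Because the baseline is versioned, “we tested against matrix v3 plus the tablet placement” is an unambiguous statement you can put in front of a client at creative approval and again before launch.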
4. Make sure it works as expected
Remember how we documented expected behavior with the client? Working and working right are two different things, and no one likes surprises right before launch. Confirm that the ad is functioning as expected on each device by referencing the documentation developed in Step 2. This means that instead of using “simulated” mobile previews viewed on desktop, you should provide preview links that can be opened on actual devices. In many cases, emulators don’t accurately represent a live environment or don’t show how the ad will behave during the campaign. Even if it feels more laborious, clients and publishers need to sign off on a live preview link.
5. Mandatory 24-hour check-in
The campaign is live! Within 24 hours, and again within a week, review all reports. Reporting discrepancies are the first signal that there could be an issue, even one as simple as a missing tracker. Remember, you can’t go back in time and recapture that data! Catch inconsistencies early and you’ll have the opportunity to course correct with minimal impact.
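The check-in itself can be partially automated by comparing counts from the ad server against a third-party tracker and flagging any metric whose gap exceeds a tolerance. A minimal sketch, assuming two reporting sources and a 10% threshold; the numbers and field names are made up for illustration:

```python
def discrepancy(ad_server_count, tracker_count):
    """Relative gap between ad-server and third-party counts."""
    if ad_server_count == 0:
        return 1.0 if tracker_count else 0.0
    return abs(ad_server_count - tracker_count) / ad_server_count

# Hypothetical day-one numbers pulled from the two reporting sources.
ad_server = {"impressions": 120_000, "clicks": 940}
tracker   = {"impressions": 102_000, "clicks": 910}

THRESHOLD = 0.10  # flag anything beyond a 10% gap for investigation

flagged = [
    metric for metric in ad_server
    if discrepancy(ad_server[metric], tracker[metric]) > THRESHOLD
]
print(flagged)  # → ['impressions'] (15% gap; clicks are within ~3%)
```

A script like this run on a schedule turns the 24-hour check-in from a reminder on someone's calendar into a report that lands in your inbox.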
A little bit of extra upfront time and investment is well worth it. Multiscreen doesn’t have to be hard – it just takes organization. Pair solid QA with great ad technology, distinctive creative and comprehensive reporting and you have a recipe for success.