Our musings on making video personal
The following article about video advertising measurement was originally published on Adweek as a guest contribution by me, but if you can’t get past the paywall then enjoy the read below!
For today’s marketer, there has never been a more exciting time to work with video. Advances in underlying data (the “who”) and advertising inventory management (the “where”) make it easier than ever to marry consumer desire with the elegant beauty conveyed by sight, sound and motion. Personalization extends video’s capabilities even further, delivering uniquely tailored experiences to each individual.
And yet, despite 20 years of advances in digital measurement and multi-touch attribution, we all too often find that the ‘view’ (a.k.a. a gross impression), followed closely by the click, remains the advertising community’s common currency for evaluating the impact of video.
My friends, if you only care about views, then there are likely more efficient ways to achieve them, such as outdoor advertising close to highways or the more ephemeral skywriting.
Simply stated, video views are a vanity metric and should be treated as such, like followers bought on Instagram or Facebook. That’s why we advise our customers to look beyond video views (in our case, personalized video views) toward revenue impact, such as return on ad spend (ROAS) or the incremental lift in revenue generated over and above a group unexposed to video.
If you are already prepared for more advanced attribution, don’t wait for multi-touch attribution to catch up to you. Instead, create your own control/exposed split test to measure the efficacy of your video programs today:
A properly constructed A/B test isolates video as the sole difference between one audience and another, so any difference in outcomes can be attributed directly to the video. The details matter enormously here. For example, if you are testing whether a video ad influences customers to purchase more effectively than a traditional display ad, you need to ensure both ads contain the same information. If the video ad leads with high-level brand messaging while the display ad simply shows a product and its price, you have already undermined the validity of the test: with multiple variables in play, it becomes difficult to attribute the results to the video itself.
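For teams running these tests in-house, a common way to keep the split clean is deterministic bucketing: hash each user ID so every user lands in the same group every time they show up. Here is a minimal sketch in Python; the function name, experiment label and 50/50 split are illustrative assumptions, not a reference to any particular ad platform’s API.

```python
import hashlib

def assign_group(user_id: str, experiment: str = "video-vs-display") -> str:
    """Deterministically split users 50/50 between the video (exposed)
    and display (control) groups. Hashing the ID means the same user
    always sees the same variant for the life of the test."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "video" if int(digest, 16) % 2 == 0 else "display"

# Each user is assigned once and stays in that group on every visit.
group = assign_group("user-123")
```

Because the assignment is a pure function of the user ID, you don’t need to store group membership anywhere, and the split stays stable even across devices and sessions as long as the ID does.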
Beyond an airtight testing environment, make sure your control and exposed audiences are large enough to yield statistically significant results, and don’t let the test overstay its welcome. Running a test too long introduces bias, while running it too briefly can mislead you through the “Novelty Effect”: when a feature is new, it generates far more excitement and engagement than it will in the long run, once the novelty has worn off. In other words, run the test long enough, and on an audience large enough, to reveal accurate results, without letting either grow to the point where bias creeps in.
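To put a number on “statistically significant”: a standard two-proportion z-test tells you whether the conversion lift in the exposed group could plausibly be chance. A rough sketch using only Python’s standard library, with made-up conversion counts purely for illustration:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.
    Group A is the control (display), group B the exposed (video)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 10,000 users per group,
# 2.0% conversion on display vs 2.6% on video
z, p = two_proportion_z_test(200, 10_000, 260, 10_000)
```

With these made-up numbers the p-value comes out well under 0.05, so the lift would be deemed significant; with only a few hundred users per group, the very same rates would not be.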
Nonetheless, if you’re not ready to take the plunge all the way to revenue, then consider perhaps the most under-appreciated metric that matters (and measures attention): time spent. Time is our most precious commodity, and exposure over time builds favorability and brand preference. Perhaps most surprisingly, video generally trounces display advertising on efficiency relative to time, especially with enhanced approaches like personalized video, which extend the potential value of the consumer’s time by offering timely, relevant and useful information tailored to him or her. The current Media Rating Council viewability guidelines for display advertising effectively state that at least half of an ad’s pixels must be in view for one continuous second after the ad renders.
If you buy media at a $5 cost per thousand (CPM) for a (questionable) minimum of one second of exposure time, versus a $15 CPM for video that’s viewed for more than three seconds, video delivers time spent more efficiently. While we’re not suggesting the creation of 3-second video ads per se, we are advocating a reevaluation of time as a more consistent currency for comparing the effectiveness of each visual advertising medium.
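The arithmetic behind that claim is simple: divide the effective cost per impression by the seconds the ad was actually in view. A quick sketch; the CPMs come from the example above, and the four-second video view time is a hypothetical stand-in for “more than three seconds”:

```python
def cost_per_attention_second(cpm_dollars, avg_seconds_in_view):
    """Dollars paid per second of in-view time, per impression.
    CPM is the cost of 1,000 impressions."""
    cost_per_impression = cpm_dollars / 1000
    return cost_per_impression / avg_seconds_in_view

display = cost_per_attention_second(5, 1)   # $5 CPM, 1 second in view
video = cost_per_attention_second(15, 4)    # $15 CPM, 4 seconds viewed
```

Display works out to half a cent per second of attention, video to less than that, and the break-even point falls at exactly three seconds of view time; anything beyond that makes the $15 CPM the cheaper buy per second.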
As video advertising continues to evolve, tying it to real business value, such as return on ad spend, will only get easier. Organizing the data may be long and tedious today, but the payoff is worth it. When testing video ads, don’t let vanity metrics alone determine success. Instead, start measuring your video advertising with metrics that tell you something important about the attention and intention of your customers.