Online video’s explosive growth has thrown up major brand safety challenges for our industry, and it’s something that affects everyone in the advertising food chain, to the point where brand safety should really be thought of as ‘industry safety’. Here Adrian Moxley, Co-founder and Chief Visionary Officer at WeSEE, explains how more could be done by both publishers and advertisers to protect brands from appearing alongside inappropriate and offensive video content.
The web, as we know it today, is a highly visual, social and dynamic environment thanks to an incredible period of rapid technological development. When YouTube launched just a decade ago, Twitter was no more than a scribble on a napkin, while Instagram and Pinterest would not see the light of day for five more years. Back then, a world filled with Snapchats and Vines would have been beyond our wildest dreams.
The success of these platforms is mainly down to the fact that large audiences naturally congregate around social and visual content. According to eMarketer’s Q2 2015 State of Video report, viewers are spending more time watching digital video than ever before – an average of 16 minutes per day.
From an advertiser perspective, this is like gold dust – offering a creative opportunity to embrace and engage with these audiences.
And for publishers, well, they can make the most of it by monetising their properties in the right way and providing the right visual environment for advertisers.
But with so much content being uploaded on a daily basis – 300 hours’ worth of video on YouTube alone, every minute – how can publishers and advertisers keep track of it all? And what happens when mobile video live streaming becomes the next consumer phenomenon? Who will manage and curate all this new visual data?
Despite the increasingly visual nature of the internet, many companies still use outdated contextual data to validate a picture or video based on the text around it, or they rely on manual moderation and intervention. This may work for a small volume of video content within a controlled environment, but apply this to lots of user-generated content (UGC), which is what more and more brands are investing heavily in, and there’s a far higher risk of inappropriate or offensive content slipping through the cracks.
Brand Equity is Fragile
You only have to look at the world’s largest advertiser, Procter & Gamble, whose Swiffer ad ended up as the pre-roll for ISIS propaganda videos posted on YouTube. The same happened to an ad featuring Jennifer Aniston promoting Aveeno skin cream and, in a similar incident, to Bud Light’s ‘Up For Whatever’ ads.
All of these cases were resolved fairly quickly. Even so, each was a huge embarrassment for the video publisher, and an even bigger one for the brands, which had spent years building up a reputation only to see it potentially crushed within seconds.
Most verification tools rely on user reports or metadata to moderate content, rather than analysing the content actually being viewed, which makes it difficult to validate user-generated content as safe with sufficient accuracy. Worse, the same parameters are applied across premium content, even though UGC demands a different standard. The priority should be the actual content a user lands on.
The good news is that technology now exists that can ‘understand’ and classify video content, as well as automate and improve the whole process of placing video ads. Introducing an automated solution on upload that incorporates both visual recognition technology and brand safety criteria provides a comprehensive understanding of the visual content. This removes the risk of unsafe material going live before it is spotted by a moderator, the web audience or the brand manager. It’s not only a safer solution, but can also be a more efficient one if the site has scale, as there is no need to bring in teams of human moderators.
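The decision logic behind such an upload-time gate can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual system: the category names, threshold and the `classify_frame` stub are all assumptions standing in for a real visual-recognition model.

```python
# Sketch of an upload-time brand safety gate (all names hypothetical).
# A real system would run a visual-recognition model on each frame;
# here a stub returns per-category confidence scores so the gating
# logic itself is visible.

UNSAFE_CATEGORIES = {"violence", "adult", "extremism"}
UNSAFE_THRESHOLD = 0.8  # assumed policy threshold

def classify_frame(frame):
    # Stand-in for the model: returns {category: confidence}.
    return frame["scores"]

def is_brand_safe(video_frames):
    """Block the upload if any frame scores high on an unsafe category."""
    for frame in video_frames:
        for category, score in classify_frame(frame).items():
            if category in UNSAFE_CATEGORIES and score >= UNSAFE_THRESHOLD:
                return False
    return True

safe_video = [{"scores": {"sports": 0.9, "violence": 0.1}}]
flagged_video = [{"scores": {"news": 0.5, "extremism": 0.95}}]

print(is_brand_safe(safe_video))     # True
print(is_brand_safe(flagged_video))  # False
```

Because the check runs at upload, flagged content never reaches the ad exchange in the first place, rather than being pulled after a complaint.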
Publishers need to build visual classification tools into their online properties. These can then be utilised by brand advertisers to place highly targeted ads via an exchange and, in essence, create new additional inventory gained from the theme and content of the video.
Automating this process is vital, and neural networks have become a key tool for image recognition because they allow ads to be targeted effectively, quickly and accurately – steering well clear of inappropriate or offensive content. Interpreting UGC in this way is a win-win for everyone: brands are more likely to achieve direct engagements, and publishers will ultimately increase their revenues.
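The publisher side of this pipeline – turning a classifier’s output into brand-safe, targetable inventory for an exchange – can be sketched as follows. Again this is an illustrative assumption, not a real exchange API: the blocklist, threshold and `to_ad_segments` helper are hypothetical.

```python
# Hypothetical sketch: converting video classification labels into
# targetable ad segments, with unsafe themes excluded outright.

BLOCKLIST = {"extremism", "violence", "adult"}

def to_ad_segments(labels, min_confidence=0.6):
    """Keep confident, brand-safe labels as targeting segments.

    labels: {theme: confidence} from a video classifier.
    Returns segment names sorted by confidence, or an empty list
    if the video is confidently unsafe (no inventory exposed).
    """
    safe = {}
    for label, confidence in labels.items():
        if label in BLOCKLIST:
            if confidence >= min_confidence:
                return []  # unsafe content: expose no inventory at all
            continue  # low-confidence unsafe label: never a segment
        if confidence >= min_confidence:
            safe[label] = confidence
    return sorted(safe, key=safe.get, reverse=True)

print(to_ad_segments({"cooking": 0.92, "family": 0.7, "sports": 0.3}))
# ['cooking', 'family']
print(to_ad_segments({"cooking": 0.9, "extremism": 0.85}))
# []
```

The design choice worth noting is that an unsafe label doesn’t just get filtered out – it voids the video’s inventory entirely, which is the behaviour a brand safety policy implies.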
At the end of the day, it’s your brand’s reputation on the line – whether you’re a publisher or an advertiser. Effectively monitoring video content protects this and creates a safe, stable and highly effective platform for advertisers.