One of online video’s major challenges is still the question of how – or perhaps, whether – the industry can monetise the mid-to-long tail of video inventory. Brand safety is one major concern, particularly when it comes to user-generated content (UGC); another is the question of how you organise and bundle the inventory so it can be sold to advertisers. Over the last couple of years, a number of companies have sprung up to solve both of these problems, one of which is WeSee, a London-based technology company.
Sam Kayum is Chief Commercial Officer at WeSee, but prior to joining the company he gained considerable experience in video advertising as both Managing Director for Smartclip UK and European Managing Director for Lycos. Now Kayum says the video advertising industry is leaving money on the table. “The IAB’s figures say £130 million was spent on video last year, but it could easily be more than that if there was more premium video out there. When I was at Smartclip I know I could have sold my premium content twice over. The demand is there from the agencies, especially from the likes of Group M who have moved 3 percent of their broadcast budget to video, and there are others following suit,” he says.
“There are five things we need to put in place before we monetise the long tail,” adds Kayum. “Relevance, accuracy, cost, performance and safety, and until they’re fully addressed it’s going to be difficult for advertisers to move budget into that particular area. I believe that this is where technologies like ours can bring things like user-generated content and social video – which previously haven’t been monetised very well – into the market.”
However, organising UGC in such a way that it fits with the IAB’s categories – or the ‘IAB Contextual Taxonomy’ to be precise – is no easy task. It was a little easier with display advertising as you could usually work out the context by looking at the text in the body of the article.
But cracking video is more difficult, as Kayum explains: “Our technology enables us to recognise and break apart images so that we can determine the composition of those images. While we can also look at the text around the image for context too, our key USP is that we don’t have to rely on text and metadata to analyse and classify the composition of an image. Once the image has been broken down, the content is indexed and converted into keywords, which can then be used to either create categories for targeting or flag up any salacious material that might be regarded as not being brand safe.”
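To make the keyword-to-category step Kayum describes concrete, here is a minimal sketch in Python. It assumes the vision stage has already broken an image apart into labels; the category names, keyword lists and unsafe terms are purely illustrative, not WeSee’s actual taxonomy or blocklist.

```python
# Illustrative sketch of the classification step described above:
# a vision model has (hypothetically) produced labels for an image;
# we index those labels as keywords, map them onto IAB-style
# categories, and flag anything that is not brand safe.
# All category names and term lists here are assumptions.

IAB_STYLE_CATEGORIES = {
    "Sports": {"football", "stadium", "goal", "referee"},
    "Automotive": {"car", "engine", "road", "traffic"},
    "Food & Drink": {"pizza", "restaurant", "coffee"},
}

UNSAFE_KEYWORDS = {"weapon", "violence", "nudity"}

def classify(labels):
    """Map image labels to categories and a brand-safety flag."""
    keywords = {label.lower() for label in labels}
    categories = [
        name for name, terms in IAB_STYLE_CATEGORIES.items()
        if keywords & terms  # any keyword overlap assigns the category
    ]
    brand_safe = not (keywords & UNSAFE_KEYWORDS)
    return {"categories": categories, "brand_safe": brand_safe}

print(classify(["Football", "Stadium", "crowd"]))
# → {'categories': ['Sports'], 'brand_safe': True}
```

In practice the targeting categories and the brand-safety check would be driven by the same keyword index, which is why the quote above treats them as two outputs of one process.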
But how accurate is the technology? “It’s about 75 to 80 percent accurate,” says Kayum. “Which is about 75 to 80 percent more accurate than anything else out there,” he jokes.