YouTube is facing a fresh brand safety crisis after a number of brands suspended advertising on the platform when it emerged their ads were running against content showing the exploitation of young children. Brands including O2, Which? and Dropbox suspended advertising on YouTube after The Times and The Sun flagged a series of ad-enabled videos showing children in pain and distress. YouTube responded by removing one of the offending channels, and removing or demonetising the videos highlighted by the two newspapers.
YouTube has spent the year recovering from previous brand safety concerns, where advertisers found their ads were playing alongside extremist content, leading around 250 companies to stop advertising on the platform. Many have since returned, convinced by YouTube’s efforts to tighten its content filtering, but now doubt is being cast again on its ability to control its content as large channels featuring monetised disturbing content involving children have been pulled into the spotlight.
The Times singled out Toy Freaks, a channel that had run since 2011 and drawn over seven billion views, on which a man posted videos of pranks he pulled on his two young daughters, causing them pain and distress, as well as bizarre footage of him and his daughters crawling around and spitting liquid on each other. The Sun described some of the videos, which have since been taken down, in which the man filmed his daughters screaming in distress as frogs and snakes were placed in a bathtub with them, and crying after being spoonfed baby food. YouTube analytics specialist SocialBlade estimates the channel could have earned anywhere between £544k and £8.7m per year.
Belinda Winder, a forensic psychologist and head of the sexual offences unit at Nottingham Trent University, told The Times that some of this content is designed to appeal both to children and to adults “who are not simply paedophilic but who also suffer from sexual fetishes involving pain and abuse”.
YouTube has now taken down the Toy Freaks channel and blocked or demonetised similar disturbing content on other channels, and says it will redouble its efforts to filter out and remove this type of content. “We take child safety extremely seriously and have clear policies against child endangerment,” it said in an official statement. “We recently tightened the enforcement of these policies to tackle content featuring minors where we receive signals that cause concern. It’s not always clear that the uploader of the content intends to break our rules, but we may still remove their videos to help protect viewers, uploaders and children. We’ve terminated the Toy Freaks channel for violation of our policies. We will be conducting a broader review of associated content in conjunction with expert Trusted Flaggers.”
YouTube’s child safety standards had already been under scrutiny prior to the new revelations, due to channels effectively gaming its algorithms to show inappropriate content to children. A trend has emerged of channels loading their videos with popular tags and keywords aimed at children, making them likely to feature in children’s searches and to be served to children via YouTube’s autoplay feature. At their most benign these videos are merely bizarre, machine-generated clips featuring kids’ characters, but at their worst they are clearly malicious attempts to show violent or sexual content to children. Of particular concern to brands is how some of these channels ended up on Google Preferred; Google Preferred doesn’t promise brand safety, but advertisers have nonetheless described feeling misled by Google over the value of the product.