YouTube, which has come under fire for hosting videos aimed at children that contain violent and sexual themes, said it will dramatically increase the number of people overseeing content in 2018.
In a blog post late Monday, CEO Susan Wojcicki said YouTube will increase the number of people working to oversee content to more than 10,000 next year. “Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content,” she said.
Google-owned YouTube hit a rough patch in late 2017, with advertisers pulling away from the service after news reports showed child predators using videos of young children as de facto chat rooms, and after an outcry over YouTube creators splicing non-kid-friendly language and themes into videos featuring children’s characters such as Elsa from Frozen or Nickelodeon’s Peppa Pig.
YouTube receives 400 hours of new video per minute, and its current system uses machine learning to flag videos that should be age-restricted. But critics say YouTube gets so many submissions that it’s easy to fool the artificial intelligence.
The issue YouTube faces is echoed around Silicon Valley, where self-service networks largely controlled by sophisticated algorithms keep profit margins high but are vulnerable to exploitation. Facebook has said it would increase the number of human reviewers after it revealed the extent to which a Russian entity bought ads using its automated ad network to manipulate U.S. voters.
James Steyer, the CEO of non-profit Common Sense, which works with parents and educators on tech issues, says that for kids, an algorithmic solution “is not sufficient.” Policing the out-of-control videos on YouTube requires “the hiring of humans.”
By expanding its reviewers by 25%, YouTube is tacitly admitting that the computers can’t do it all. But it is still betting that they will eventually catch up.
Wojcicki said in recent weeks YouTube has used machine learning to help human reviewers “find and terminate hundreds of accounts and shut down hundreds of thousands of comments.”
YouTube requires users to be at least 13 years old, but many kids get around the rule by using their parents’ accounts. Additionally, four of the five most popular YouTube channels in the U.S. are children-related, according to measurement service Tubular Labs.
In her blog post, Wojcicki talked about creators using YouTube’s platform to create great work, but also about the downside. “I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm.”
She noted that YouTube has been fighting rogue creators by testing new systems to combat emerging and evolving threats, tightening policies on what content can appear on the platform, and demonetizing videos, which takes away earnings potential for creators, who split ad revenue with YouTube 55%-45%.
“Our goal is to stay one step ahead of bad actors, making it harder for policy-violating content to surface or remain on YouTube.”
YouTube also employs 100 “Trusted Flaggers” globally who work with the company to identify videos that shouldn’t appear on the service.
Wojcicki said some 150,000 videos have been removed for violent extremism and that advances in machine learning let YouTube take down nearly 70% of violent extremist content within eight hours of upload.
“Since we started using machine learning to flag violent and extremist content in June, the technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess.”
Additionally, she said YouTube would take more action to keep advertisers away from inappropriate content.
YouTube will apply stricter criteria to what videos can be monetized, conduct more manual curation and significantly ramp up its team of ad reviewers to ensure ads “are only running where they should…it’s important we get this right for both advertisers and creators, and over the next few weeks, we’ll be speaking with both to hone this approach.”