YouTube has been struggling for several weeks with the proliferation of disturbing videos aimed at children, content the company has admitted it has a problem with. The first measures were the closure of fifty channels, the removal of thousands of videos, and a five-point plan to make the platform a safer place for children, which is what a space like YouTube Kids was intended to be from the start.
Given the situation, the service's CEO, Susan Wojcicki, chose to address users directly through a post on the corporate blog. In it she announced that over the next year the goal is to have more than 10,000 human moderators reviewing Google's platforms, including YouTube, to address content that may violate their policies.
Despite automation, humans are still necessary
According to Wojcicki, "human reviewers remain essential to both removing content and training machine learning systems, because human judgment is critical to making contextualized decisions." The YouTube CEO says that since June her teams have manually reviewed almost two million videos for violent extremist content, training their technology "to identify similar videos in the future."
This machine learning, in turn, has been used to lend a hand to the humans who trained it, assisting them in "finding and closing hundreds of accounts and hundreds of thousands of comments" over recent weeks. The system, deployed in the middle of the year, has enabled human moderators to remove almost five times more videos than before. "Today, 98% of the videos we remove for violent extremism are flagged by our machine-learning algorithms," she stated.
Thanks to the tandem of humans and automated systems, and especially to the latter, according to Google's figures, 70% of violent extremist content is removed within eight hours of upload, and almost half within two hours. Those times, she says, continue to improve.
Given the positive results obtained in this area, the company has begun training this machine learning on other "challenging content," including child safety and hate speech.
"I've seen how [YouTube] has helped enlighten my children, giving them a greater and broader understanding of our world and the billions who inhabit it," Wojcicki commented, "but I've also seen up close that there can be another, more worrisome side to YouTube's openness. I've seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm."
By facing it head-on, Susan Wojcicki seeks to confront a controversy that has cost the company advertisers, to indirectly tackle other worrying problems, and to present Google's video service as a safe place in the battle against inappropriate content. The war continues.